Test Report: Docker_Linux_containerd_arm64 18350

b07500d1f25ef3b9b4cf5a8c10c74b3642cd60ca:2024-03-11:33512

Test failures (8/335)

TestAddons/parallel/Ingress (35.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-109866 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-109866 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-109866 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [19732525-3c06-4ff6-b1c0-047b7f086a71] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [19732525-3c06-4ff6-b1c0-047b7f086a71] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003722142s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-109866 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.063510726s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
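The failing step queries the ingress-dns addon directly at the node IP (192.168.49.2) and every attempt times out. The lookup can be reproduced without nslookup using a short Go program; this is a minimal sketch for local debugging, not the test's own code — the server address and the 15s budget are taken from the log above, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every query to the ingress-dns server at the minikube node IP,
	// bypassing /etc/resolv.conf — the same thing
	// "nslookup hello-john.test 192.168.49.2" does.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err) // corresponds to ";; connection timed out"
		return
	}
	fmt.Println("resolved:", addrs)
}

A timeout here means UDP/53 on the node IP is not answering at all, which points at the ingress-dns pod or the container network rather than at a wrong DNS record.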
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-109866 addons disable ingress --alsologtostderr -v=1: (7.783234672s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-109866
helpers_test.go:235: (dbg) docker inspect addons-109866:

-- stdout --
	[
	    {
	        "Id": "53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d",
	        "Created": "2024-03-11T12:47:54.055346443Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 747758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-11T12:47:54.367201973Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d/hosts",
	        "LogPath": "/var/lib/docker/containers/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d-json.log",
	        "Name": "/addons-109866",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-109866:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-109866",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/142c08e208268212c96ed1c5ca80c49e40d70b844e297dd4d382cf0169a2b2ba-init/diff:/var/lib/docker/overlay2/361ff7146c1f8f9f5c07c69a78aa76c291e59293e7654dd235648b6a877bb54d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/142c08e208268212c96ed1c5ca80c49e40d70b844e297dd4d382cf0169a2b2ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/142c08e208268212c96ed1c5ca80c49e40d70b844e297dd4d382cf0169a2b2ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/142c08e208268212c96ed1c5ca80c49e40d70b844e297dd4d382cf0169a2b2ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-109866",
	                "Source": "/var/lib/docker/volumes/addons-109866/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-109866",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-109866",
	                "name.minikube.sigs.k8s.io": "addons-109866",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a21c945f9a0ebc9c10525fc36801b5b72743725b6baeedb03335921c78a575e6",
	            "SandboxKey": "/var/run/docker/netns/a21c945f9a0e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33743"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33742"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33739"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33741"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33740"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-109866": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "53572c512cfb",
	                        "addons-109866"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "ab1cf46bde62886ba4c93b59c9a4335370cc92462b41b31afd1f8a70abc84310",
	                    "EndpointID": "28960e5aab80580d2b356d6c37c1ff6d4036ccfbd32f9fed0d11de957c5f4157",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-109866",
	                        "53572c512cfb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
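The post-mortem helpers further down read exactly this inspect output to find the published host ports (22/tcp -> 127.0.0.1:33743 is the SSH endpoint used later in the log). As a minimal sketch of that extraction — assuming only the Docker CLI and the container name from this report, and mirroring just the fields shown above rather than minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// portBinding mirrors the {"HostIp": ..., "HostPort": ...} objects
// under NetworkSettings.Ports in the inspect output above.
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// "docker inspect" prints a JSON array with one element per container.
	out, err := exec.Command("docker", "inspect", "addons-109866").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatal(err)
	}
	for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIP, b.HostPort) // 127.0.0.1:33743 above
	}
}

The same value can be pulled with a Go template, as the log itself shows later: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866.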
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-109866 -n addons-109866
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-109866 logs -n 25: (1.44396257s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-568522                                                                     | download-only-568522   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-228434                                                                     | download-only-228434   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-628520                                                                     | download-only-628520   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | --download-only -p                                                                          | download-docker-201665 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | download-docker-201665                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-201665                                                                   | download-docker-201665 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-995452   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | binary-mirror-995452                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34573                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-995452                                                                     | binary-mirror-995452   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | addons-109866                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | addons-109866                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-109866 --wait=true                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-109866 ip                                                                            | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	| addons  | addons-109866 addons disable                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	|         | -p addons-109866                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-109866 ssh cat                                                                       | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	|         | /opt/local-path-provisioner/pvc-28835a80-bbb1-42b9-a246-925c8b10c615_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-109866 addons disable                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109866 addons                                                                        | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109866 addons                                                                        | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC |                     |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:51 UTC | 11 Mar 24 12:51 UTC |
	|         | addons-109866                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:51 UTC | 11 Mar 24 12:51 UTC |
	|         | -p addons-109866                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:51 UTC | 11 Mar 24 12:51 UTC |
	|         | addons-109866                                                                               |                        |         |         |                     |                     |
	| addons  | addons-109866 addons                                                                        | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:51 UTC | 11 Mar 24 12:51 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-109866 ssh curl -s                                                                   | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:51 UTC | 11 Mar 24 12:51 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-109866 ip                                                                            | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:51 UTC | 11 Mar 24 12:51 UTC |
	| addons  | addons-109866 addons disable                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:51 UTC | 11 Mar 24 12:51 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-109866 addons disable                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:51 UTC | 11 Mar 24 12:51 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:47:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:47:30.505625  747291 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:47:30.505772  747291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:30.505783  747291 out.go:304] Setting ErrFile to fd 2...
	I0311 12:47:30.505789  747291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:30.506024  747291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 12:47:30.506486  747291 out.go:298] Setting JSON to false
	I0311 12:47:30.507360  747291 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16195,"bootTime":1710145056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 12:47:30.507434  747291 start.go:139] virtualization:  
	I0311 12:47:30.511718  747291 out.go:177] * [addons-109866] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:47:30.514433  747291 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 12:47:30.516202  747291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:47:30.514447  747291 notify.go:220] Checking for updates...
	I0311 12:47:30.518284  747291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:47:30.520517  747291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 12:47:30.522463  747291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 12:47:30.524331  747291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 12:47:30.526767  747291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:47:30.547159  747291 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:47:30.547281  747291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:30.619042  747291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:47:30.609975253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:30.619154  747291 docker.go:295] overlay module found
	I0311 12:47:30.621840  747291 out.go:177] * Using the docker driver based on user configuration
	I0311 12:47:30.623441  747291 start.go:297] selected driver: docker
	I0311 12:47:30.623458  747291 start.go:901] validating driver "docker" against <nil>
	I0311 12:47:30.623471  747291 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 12:47:30.624094  747291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:30.677742  747291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:47:30.668868648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:30.677920  747291 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:47:30.678161  747291 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 12:47:30.680581  747291 out.go:177] * Using Docker driver with root privileges
	I0311 12:47:30.682818  747291 cni.go:84] Creating CNI manager for ""
	I0311 12:47:30.682841  747291 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:47:30.682853  747291 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:47:30.682935  747291 start.go:340] cluster config:
	{Name:addons-109866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:47:30.685301  747291 out.go:177] * Starting "addons-109866" primary control-plane node in "addons-109866" cluster
	I0311 12:47:30.687504  747291 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 12:47:30.689781  747291 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:47:30.691961  747291 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 12:47:30.692025  747291 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0311 12:47:30.692038  747291 cache.go:56] Caching tarball of preloaded images
	I0311 12:47:30.692051  747291 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:47:30.692122  747291 preload.go:173] Found /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 12:47:30.692132  747291 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0311 12:47:30.692500  747291 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/config.json ...
	I0311 12:47:30.692566  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/config.json: {Name:mk0a8adc75169f20147b340b95375672a0f5ea0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:47:30.707150  747291 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:47:30.707274  747291 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:47:30.707304  747291 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 12:47:30.707323  747291 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 12:47:30.707338  747291 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 12:47:30.707344  747291 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0311 12:47:46.860177  747291 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0311 12:47:46.860220  747291 cache.go:194] Successfully downloaded all kic artifacts
	I0311 12:47:46.860251  747291 start.go:360] acquireMachinesLock for addons-109866: {Name:mkdf0c11320566f0571b3fb5c40daf88466f431d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 12:47:46.861162  747291 start.go:364] duration metric: took 886.928µs to acquireMachinesLock for "addons-109866"
	I0311 12:47:46.861208  747291 start.go:93] Provisioning new machine with config: &{Name:addons-109866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0311 12:47:46.861307  747291 start.go:125] createHost starting for "" (driver="docker")
	I0311 12:47:46.863453  747291 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0311 12:47:46.863711  747291 start.go:159] libmachine.API.Create for "addons-109866" (driver="docker")
	I0311 12:47:46.863754  747291 client.go:168] LocalClient.Create starting
	I0311 12:47:46.863880  747291 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem
	I0311 12:47:47.012226  747291 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem
	I0311 12:47:47.789569  747291 cli_runner.go:164] Run: docker network inspect addons-109866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0311 12:47:47.807956  747291 cli_runner.go:211] docker network inspect addons-109866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0311 12:47:47.808053  747291 network_create.go:281] running [docker network inspect addons-109866] to gather additional debugging logs...
	I0311 12:47:47.808078  747291 cli_runner.go:164] Run: docker network inspect addons-109866
	W0311 12:47:47.823516  747291 cli_runner.go:211] docker network inspect addons-109866 returned with exit code 1
	I0311 12:47:47.823549  747291 network_create.go:284] error running [docker network inspect addons-109866]: docker network inspect addons-109866: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-109866 not found
	I0311 12:47:47.823576  747291 network_create.go:286] output of [docker network inspect addons-109866]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-109866 not found
	
	** /stderr **
	I0311 12:47:47.823688  747291 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 12:47:47.839307  747291 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40026949c0}
	I0311 12:47:47.839348  747291 network_create.go:124] attempt to create docker network addons-109866 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0311 12:47:47.839412  747291 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-109866 addons-109866
	I0311 12:47:47.908199  747291 network_create.go:108] docker network addons-109866 192.168.49.0/24 created
	I0311 12:47:47.908233  747291 kic.go:121] calculated static IP "192.168.49.2" for the "addons-109866" container
	I0311 12:47:47.908305  747291 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0311 12:47:47.921970  747291 cli_runner.go:164] Run: docker volume create addons-109866 --label name.minikube.sigs.k8s.io=addons-109866 --label created_by.minikube.sigs.k8s.io=true
	I0311 12:47:47.938316  747291 oci.go:103] Successfully created a docker volume addons-109866
	I0311 12:47:47.938413  747291 cli_runner.go:164] Run: docker run --rm --name addons-109866-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109866 --entrypoint /usr/bin/test -v addons-109866:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0311 12:47:49.793841  747291 cli_runner.go:217] Completed: docker run --rm --name addons-109866-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109866 --entrypoint /usr/bin/test -v addons-109866:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (1.855384667s)
	I0311 12:47:49.793875  747291 oci.go:107] Successfully prepared a docker volume addons-109866
	I0311 12:47:49.793909  747291 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 12:47:49.793931  747291 kic.go:194] Starting extracting preloaded images to volume ...
	I0311 12:47:49.794018  747291 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-109866:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0311 12:47:53.975614  747291 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-109866:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.181547685s)
	I0311 12:47:53.975646  747291 kic.go:203] duration metric: took 4.181711172s to extract preloaded images to volume ...
	W0311 12:47:53.975796  747291 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0311 12:47:53.975920  747291 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0311 12:47:54.040715  747291 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-109866 --name addons-109866 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109866 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-109866 --network addons-109866 --ip 192.168.49.2 --volume addons-109866:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0311 12:47:54.376115  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Running}}
	I0311 12:47:54.402847  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:47:54.426091  747291 cli_runner.go:164] Run: docker exec addons-109866 stat /var/lib/dpkg/alternatives/iptables
	I0311 12:47:54.494091  747291 oci.go:144] the created container "addons-109866" has a running status.
	I0311 12:47:54.494118  747291 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa...
	I0311 12:47:55.140189  747291 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0311 12:47:55.168592  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:47:55.205259  747291 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0311 12:47:55.205279  747291 kic_runner.go:114] Args: [docker exec --privileged addons-109866 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0311 12:47:55.273314  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:47:55.295900  747291 machine.go:94] provisionDockerMachine start ...
	I0311 12:47:55.296053  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:55.318748  747291 main.go:141] libmachine: Using SSH client type: native
	I0311 12:47:55.319025  747291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0311 12:47:55.319041  747291 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 12:47:55.456536  747291 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-109866
	
	I0311 12:47:55.456564  747291 ubuntu.go:169] provisioning hostname "addons-109866"
	I0311 12:47:55.456628  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:55.477723  747291 main.go:141] libmachine: Using SSH client type: native
	I0311 12:47:55.477976  747291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0311 12:47:55.477992  747291 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-109866 && echo "addons-109866" | sudo tee /etc/hostname
	I0311 12:47:55.621233  747291 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-109866
	
	I0311 12:47:55.621415  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:55.637651  747291 main.go:141] libmachine: Using SSH client type: native
	I0311 12:47:55.637906  747291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0311 12:47:55.637929  747291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-109866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-109866/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-109866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 12:47:55.764686  747291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 12:47:55.764720  747291 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18350-741028/.minikube CaCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18350-741028/.minikube}
	I0311 12:47:55.764772  747291 ubuntu.go:177] setting up certificates
	I0311 12:47:55.764782  747291 provision.go:84] configureAuth start
	I0311 12:47:55.764847  747291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109866
	I0311 12:47:55.781077  747291 provision.go:143] copyHostCerts
	I0311 12:47:55.781157  747291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/ca.pem (1078 bytes)
	I0311 12:47:55.781292  747291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/cert.pem (1123 bytes)
	I0311 12:47:55.781403  747291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/key.pem (1675 bytes)
	I0311 12:47:55.781465  747291 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem org=jenkins.addons-109866 san=[127.0.0.1 192.168.49.2 addons-109866 localhost minikube]
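
The server certificate above is generated in-process by minikube's Go code rather than by shelling out. For readers reproducing the step by hand, a rough openssl equivalent is sketched below; the SANs and org are taken from the log line above, while the key size, file layout, and the -days value (approximating the profile's CertExpiration of 26280h, about 1095 days) are assumptions, not the exact parameters minikube uses.

	# illustrative only -- not the command minikube actually runs
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.addons-109866" -out server.csr
	openssl x509 -req -in server.csr \
	  -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial -days 1095 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-109866,DNS:localhost,DNS:minikube") \
	  -out server.pem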
	I0311 12:47:57.414534  747291 provision.go:177] copyRemoteCerts
	I0311 12:47:57.414616  747291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 12:47:57.414660  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.434357  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:57.529828  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 12:47:57.554345  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 12:47:57.577870  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0311 12:47:57.601233  747291 provision.go:87] duration metric: took 1.836423002s to configureAuth
	I0311 12:47:57.601264  747291 ubuntu.go:193] setting minikube options for container-runtime
	I0311 12:47:57.601458  747291 config.go:182] Loaded profile config "addons-109866": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:47:57.601472  747291 machine.go:97] duration metric: took 2.305513654s to provisionDockerMachine
	I0311 12:47:57.601479  747291 client.go:171] duration metric: took 10.737714993s to LocalClient.Create
	I0311 12:47:57.601504  747291 start.go:167] duration metric: took 10.737789733s to libmachine.API.Create "addons-109866"
	I0311 12:47:57.601517  747291 start.go:293] postStartSetup for "addons-109866" (driver="docker")
	I0311 12:47:57.601527  747291 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 12:47:57.601583  747291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 12:47:57.601626  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.617023  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:57.713972  747291 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 12:47:57.716973  747291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0311 12:47:57.717013  747291 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0311 12:47:57.717025  747291 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0311 12:47:57.717032  747291 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0311 12:47:57.717042  747291 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-741028/.minikube/addons for local assets ...
	I0311 12:47:57.717103  747291 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-741028/.minikube/files for local assets ...
	I0311 12:47:57.717131  747291 start.go:296] duration metric: took 115.608999ms for postStartSetup
	I0311 12:47:57.717437  747291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109866
	I0311 12:47:57.732379  747291 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/config.json ...
	I0311 12:47:57.732667  747291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 12:47:57.732721  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.748243  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:57.837649  747291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0311 12:47:57.842110  747291 start.go:128] duration metric: took 10.980788007s to createHost
	I0311 12:47:57.842135  747291 start.go:83] releasing machines lock for "addons-109866", held for 10.980951707s
	I0311 12:47:57.842206  747291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109866
	I0311 12:47:57.860240  747291 ssh_runner.go:195] Run: cat /version.json
	I0311 12:47:57.860259  747291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 12:47:57.860293  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.860329  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.876740  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:57.880925  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:58.079192  747291 ssh_runner.go:195] Run: systemctl --version
	I0311 12:47:58.083740  747291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 12:47:58.088212  747291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0311 12:47:58.113913  747291 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
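
The find/sed one-liner above injects a "name" field and pins cniVersion to 1.0.0 in whatever loopback config the base image ships. After the patch the file should look roughly as follows (the filename is assumed; it only has to match the *loopback.conf* glob):

	$ cat /etc/cni/net.d/200-loopback.conf
	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}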
	I0311 12:47:58.113991  747291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 12:47:58.144010  747291 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0311 12:47:58.144035  747291 start.go:494] detecting cgroup driver to use...
	I0311 12:47:58.144067  747291 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0311 12:47:58.144121  747291 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 12:47:58.159780  747291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 12:47:58.171356  747291 docker.go:217] disabling cri-docker service (if available) ...
	I0311 12:47:58.171432  747291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 12:47:58.185491  747291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 12:47:58.200151  747291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 12:47:58.294349  747291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 12:47:58.384987  747291 docker.go:233] disabling docker service ...
	I0311 12:47:58.385058  747291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 12:47:58.405114  747291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 12:47:58.417827  747291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 12:47:58.499644  747291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 12:47:58.588300  747291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 12:47:58.600088  747291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 12:47:58.617729  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0311 12:47:58.628239  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 12:47:58.638765  747291 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 12:47:58.638842  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 12:47:58.649346  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 12:47:58.659167  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 12:47:58.669173  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 12:47:58.678887  747291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 12:47:58.688233  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 12:47:58.698192  747291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 12:47:58.706717  747291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 12:47:58.714903  747291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:47:58.797055  747291 ssh_runner.go:195] Run: sudo systemctl restart containerd
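
Taken together, the sed edits above pin the sandbox (pause) image, stop containerd from restricting OOM score adjustments, normalize the runtime to runc v2, force the cgroupfs driver to match the host, and point the CNI conf_dir at /etc/cni/net.d before containerd is restarted. A spot-check of the rewritten file (surrounding TOML tables and exact indentation elided) would show:

	$ sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	    sandbox_image = "registry.k8s.io/pause:3.9"
	    restrict_oom_score_adj = false
	    SystemdCgroup = false
	    conf_dir = "/etc/cni/net.d"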
	I0311 12:47:58.923652  747291 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0311 12:47:58.923806  747291 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0311 12:47:58.927468  747291 start.go:562] Will wait 60s for crictl version
	I0311 12:47:58.927571  747291 ssh_runner.go:195] Run: which crictl
	I0311 12:47:58.930877  747291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 12:47:58.968628  747291 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0311 12:47:58.968783  747291 ssh_runner.go:195] Run: containerd --version
	I0311 12:47:58.991023  747291 ssh_runner.go:195] Run: containerd --version
	I0311 12:47:59.016308  747291 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0311 12:47:59.018512  747291 cli_runner.go:164] Run: docker network inspect addons-109866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 12:47:59.033644  747291 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0311 12:47:59.037292  747291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 12:47:59.048551  747291 kubeadm.go:877] updating cluster {Name:addons-109866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 12:47:59.048686  747291 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 12:47:59.048781  747291 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 12:47:59.089461  747291 containerd.go:612] all images are preloaded for containerd runtime.
	I0311 12:47:59.089486  747291 containerd.go:519] Images already preloaded, skipping extraction
	I0311 12:47:59.089563  747291 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 12:47:59.133163  747291 containerd.go:612] all images are preloaded for containerd runtime.
	I0311 12:47:59.133187  747291 cache_images.go:84] Images are preloaded, skipping loading
	I0311 12:47:59.133196  747291 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0311 12:47:59.133305  747291 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-109866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 12:47:59.133381  747291 ssh_runner.go:195] Run: sudo crictl info
	I0311 12:47:59.171920  747291 cni.go:84] Creating CNI manager for ""
	I0311 12:47:59.171942  747291 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:47:59.171952  747291 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 12:47:59.171996  747291 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-109866 NodeName:addons-109866 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 12:47:59.172154  747291 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-109866"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
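
The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml a few lines below. On kubeadm v1.28 it can be sanity-checked before the real init; a minimal sketch, assuming the file is already in place on the node:

	# validate structure and field names without touching the node
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise the full init code path without persisting anything
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run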
	
	I0311 12:47:59.172243  747291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 12:47:59.181191  747291 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 12:47:59.181268  747291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 12:47:59.190102  747291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0311 12:47:59.209291  747291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 12:47:59.228198  747291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0311 12:47:59.245976  747291 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0311 12:47:59.249326  747291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 12:47:59.260383  747291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:47:59.339134  747291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 12:47:59.355705  747291 certs.go:68] Setting up /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866 for IP: 192.168.49.2
	I0311 12:47:59.355739  747291 certs.go:194] generating shared ca certs ...
	I0311 12:47:59.355765  747291 certs.go:226] acquiring lock for ca certs: {Name:mk7162cd9946a461c84d2f2cea8ea4b87fd89373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:47:59.356526  747291 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key
	I0311 12:48:00.155957  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt ...
	I0311 12:48:00.156047  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt: {Name:mk744f20428760534dc1f0336237227fcabf7e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:00.157160  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key ...
	I0311 12:48:00.157197  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key: {Name:mkf26fd7f704dd60e3b2ddf58fe11aa885997f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:00.157310  747291 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key
	I0311 12:48:00.630864  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.crt ...
	I0311 12:48:00.630899  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.crt: {Name:mkbd4d73ba5b09247f7c9e4c991c1710cde5a749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:00.631656  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key ...
	I0311 12:48:00.631676  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key: {Name:mk75f2fa80c73336760282c57396731158542a7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:00.631778  747291 certs.go:256] generating profile certs ...
	I0311 12:48:00.631842  747291 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.key
	I0311 12:48:00.631861  747291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt with IP's: []
	I0311 12:48:01.305584  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt ...
	I0311 12:48:01.305616  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: {Name:mk722faa99f631f5601c07375460df3ca3f77ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:01.306267  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.key ...
	I0311 12:48:01.306285  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.key: {Name:mkc56fa78cee61ce4570887853684dfd4d7779d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:01.306897  747291 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key.aa9b1a7a
	I0311 12:48:01.306920  747291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt.aa9b1a7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
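
Of the SANs requested here, 10.96.0.1 is not arbitrary: it is the first usable address of the ServiceCIDR 10.96.0.0/12 shown in the cluster config, i.e. the ClusterIP that the in-cluster "kubernetes" Service receives, so the apiserver certificate must cover it. One way to confirm the derivation, assuming the common ipcalc utility is installed (output format varies between ipcalc implementations):

	$ ipcalc 10.96.0.0/12 | awk '/HostMin/ {print $2}'
	10.96.0.1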
	I0311 12:48:01.940123  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt.aa9b1a7a ...
	I0311 12:48:01.940154  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt.aa9b1a7a: {Name:mk74b6dc1ab00e4a43e06868d843c31717321777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:01.940903  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key.aa9b1a7a ...
	I0311 12:48:01.940923  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key.aa9b1a7a: {Name:mkcf67507182a282c6efc2bf09d1da75223cfdd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:01.941026  747291 certs.go:381] copying /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt.aa9b1a7a -> /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt
	I0311 12:48:01.941113  747291 certs.go:385] copying /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key.aa9b1a7a -> /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key
	I0311 12:48:01.941169  747291 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.key
	I0311 12:48:01.941192  747291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.crt with IP's: []
	I0311 12:48:02.383863  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.crt ...
	I0311 12:48:02.383896  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.crt: {Name:mkb1568d40c68b54a58602a4529a275b6bc990dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:02.384719  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.key ...
	I0311 12:48:02.384739  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.key: {Name:mka38203c5ed6bd642af86623754e20813388646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:02.385834  747291 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 12:48:02.385880  747291 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem (1078 bytes)
	I0311 12:48:02.385914  747291 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem (1123 bytes)
	I0311 12:48:02.385941  747291 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem (1675 bytes)
	I0311 12:48:02.386592  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 12:48:02.411584  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0311 12:48:02.435836  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 12:48:02.460093  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0311 12:48:02.484169  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0311 12:48:02.509101  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 12:48:02.533590  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 12:48:02.558587  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 12:48:02.582364  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 12:48:02.606448  747291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 12:48:02.624283  747291 ssh_runner.go:195] Run: openssl version
	I0311 12:48:02.629893  747291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 12:48:02.639445  747291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:48:02.642951  747291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 12:48 /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:48:02.643040  747291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:48:02.650196  747291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
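
The openssl x509 -hash call above prints the certificate's subject hash, and the b5213941.0 symlink created next is the layout OpenSSL uses to find trust anchors at verification time (the same result c_rehash would produce). Reproduced by hand:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	# OpenSSL resolves CAs by <subject-hash>.<n>, so no bundle rebuild is needed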
	I0311 12:48:02.660112  747291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 12:48:02.664155  747291 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 12:48:02.664211  747291 kubeadm.go:391] StartCluster: {Name:addons-109866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:48:02.664303  747291 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0311 12:48:02.664360  747291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 12:48:02.722093  747291 cri.go:89] found id: ""
	I0311 12:48:02.722163  747291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 12:48:02.732440  747291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 12:48:02.741880  747291 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0311 12:48:02.741979  747291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 12:48:02.752537  747291 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 12:48:02.752559  747291 kubeadm.go:156] found existing configuration files:
	
	I0311 12:48:02.752626  747291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 12:48:02.761153  747291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 12:48:02.761218  747291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 12:48:02.769438  747291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 12:48:02.778331  747291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 12:48:02.778402  747291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 12:48:02.786823  747291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 12:48:02.795413  747291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 12:48:02.795526  747291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 12:48:02.803890  747291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 12:48:02.812967  747291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 12:48:02.813056  747291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 12:48:02.821426  747291 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0311 12:48:02.864396  747291 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 12:48:02.864452  747291 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 12:48:02.902708  747291 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0311 12:48:02.902780  747291 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0311 12:48:02.902815  747291 kubeadm.go:309] OS: Linux
	I0311 12:48:02.902859  747291 kubeadm.go:309] CGROUPS_CPU: enabled
	I0311 12:48:02.902906  747291 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0311 12:48:02.902951  747291 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0311 12:48:02.902997  747291 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0311 12:48:02.903043  747291 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0311 12:48:02.903089  747291 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0311 12:48:02.903143  747291 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0311 12:48:02.903189  747291 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0311 12:48:02.903234  747291 kubeadm.go:309] CGROUPS_BLKIO: enabled
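
The CGROUPS_* lines are kubeadm's system verification reading controller availability from the kernel. On a cgroup v1 host (consistent with the "cgroupfs" driver detected earlier) the same list can be read straight from /proc; ordering and extra controllers vary by kernel:

	$ awk '$4 == 1 { print $1 }' /proc/cgroups
	cpuset
	cpu
	cpuacct
	memory
	devices
	freezer
	pids
	hugetlb
	blkio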
	I0311 12:48:02.975655  747291 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 12:48:02.975768  747291 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 12:48:02.975859  747291 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 12:48:03.214949  747291 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 12:48:03.218249  747291 out.go:204]   - Generating certificates and keys ...
	I0311 12:48:03.218412  747291 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 12:48:03.218512  747291 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 12:48:03.401767  747291 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 12:48:03.803591  747291 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 12:48:04.578095  747291 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 12:48:05.044240  747291 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 12:48:05.408934  747291 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 12:48:05.409112  747291 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-109866 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0311 12:48:05.793941  747291 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 12:48:05.794323  747291 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-109866 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0311 12:48:06.413978  747291 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 12:48:07.440071  747291 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 12:48:07.859558  747291 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 12:48:07.859920  747291 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 12:48:08.202034  747291 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 12:48:08.458612  747291 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 12:48:08.627783  747291 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 12:48:08.816841  747291 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 12:48:08.817418  747291 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 12:48:08.820031  747291 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 12:48:08.822668  747291 out.go:204]   - Booting up control plane ...
	I0311 12:48:08.822769  747291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 12:48:08.822847  747291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 12:48:08.824396  747291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 12:48:08.836220  747291 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 12:48:08.837095  747291 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 12:48:08.837354  747291 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 12:48:08.930549  747291 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 12:48:18.439209  747291 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.508723 seconds
	I0311 12:48:18.439339  747291 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 12:48:18.455990  747291 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 12:48:18.983466  747291 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 12:48:18.983696  747291 kubeadm.go:309] [mark-control-plane] Marking the node addons-109866 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 12:48:19.511424  747291 kubeadm.go:309] [bootstrap-token] Using token: bkf7y5.hnryfcxu9keivkxu
	I0311 12:48:19.513519  747291 out.go:204]   - Configuring RBAC rules ...
	I0311 12:48:19.513654  747291 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 12:48:19.533924  747291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 12:48:19.543892  747291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 12:48:19.548320  747291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 12:48:19.552141  747291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 12:48:19.557717  747291 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 12:48:19.570670  747291 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 12:48:19.813965  747291 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 12:48:19.939683  747291 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 12:48:19.941532  747291 kubeadm.go:309] 
	I0311 12:48:19.941618  747291 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 12:48:19.941627  747291 kubeadm.go:309] 
	I0311 12:48:19.941719  747291 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 12:48:19.941733  747291 kubeadm.go:309] 
	I0311 12:48:19.941760  747291 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 12:48:19.941821  747291 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 12:48:19.941874  747291 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 12:48:19.941883  747291 kubeadm.go:309] 
	I0311 12:48:19.941935  747291 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 12:48:19.941943  747291 kubeadm.go:309] 
	I0311 12:48:19.941989  747291 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 12:48:19.941998  747291 kubeadm.go:309] 
	I0311 12:48:19.942048  747291 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 12:48:19.942125  747291 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 12:48:19.942194  747291 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 12:48:19.942203  747291 kubeadm.go:309] 
	I0311 12:48:19.942300  747291 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 12:48:19.942379  747291 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 12:48:19.942387  747291 kubeadm.go:309] 
	I0311 12:48:19.942467  747291 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bkf7y5.hnryfcxu9keivkxu \
	I0311 12:48:19.942570  747291 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8388c3333519d9f29bb1cc52e18797f4b748e4ad292cdfee8cd4632271dbee8 \
	I0311 12:48:19.942593  747291 kubeadm.go:309] 	--control-plane 
	I0311 12:48:19.942600  747291 kubeadm.go:309] 
	I0311 12:48:19.942681  747291 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 12:48:19.942691  747291 kubeadm.go:309] 
	I0311 12:48:19.942770  747291 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bkf7y5.hnryfcxu9keivkxu \
	I0311 12:48:19.942872  747291 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8388c3333519d9f29bb1cc52e18797f4b748e4ad292cdfee8cd4632271dbee8 
	I0311 12:48:19.946506  747291 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0311 12:48:19.946626  747291 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 12:48:19.946650  747291 cni.go:84] Creating CNI manager for ""
	I0311 12:48:19.946658  747291 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:48:19.950384  747291 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0311 12:48:19.952261  747291 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0311 12:48:19.956904  747291 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0311 12:48:19.956928  747291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0311 12:48:19.995378  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0311 12:48:21.017387  747291 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.021970206s)
	I0311 12:48:21.017427  747291 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 12:48:21.017555  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:21.017634  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-109866 minikube.k8s.io/updated_at=2024_03_11T12_48_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563 minikube.k8s.io/name=addons-109866 minikube.k8s.io/primary=true
	I0311 12:48:21.215139  747291 ops.go:34] apiserver oom_adj: -16
	I0311 12:48:21.215237  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:21.716361  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:22.215948  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:22.716302  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:23.216335  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:23.715451  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:24.215904  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:24.716085  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:25.215989  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:25.715373  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:26.215853  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:26.715517  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:27.215371  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:27.715366  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:28.215657  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:28.715943  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:29.216239  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:29.715295  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:30.215350  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:30.716155  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:31.215386  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:31.715445  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:32.216159  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:32.324363  747291 kubeadm.go:1106] duration metric: took 11.306857958s to wait for elevateKubeSystemPrivileges
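
The burst of identical "kubectl get sa default" runs above is minikube polling at roughly 500ms intervals until kubeadm's post-init controllers have created the default ServiceAccount; that wait is what the elevateKubeSystemPrivileges duration metric measures. As a shell sketch of the equivalent loop:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done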
	W0311 12:48:32.324409  747291 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 12:48:32.324416  747291 kubeadm.go:393] duration metric: took 29.660209346s to StartCluster
	I0311 12:48:32.324433  747291 settings.go:142] acquiring lock: {Name:mk647fd5a11531f437bba0a4615b0b34bf87ec0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:32.324569  747291 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:48:32.325022  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/kubeconfig: {Name:mkea9792df2a23b99e9686253371e8a16054b02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:32.325859  747291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 12:48:32.325892  747291 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0311 12:48:32.328431  747291 out.go:177] * Verifying Kubernetes components...
	I0311 12:48:32.326164  747291 config.go:182] Loaded profile config "addons-109866": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:48:32.326175  747291 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0311 12:48:32.330538  747291 addons.go:69] Setting yakd=true in profile "addons-109866"
	I0311 12:48:32.330576  747291 addons.go:234] Setting addon yakd=true in "addons-109866"
	I0311 12:48:32.330618  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.331145  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.331296  747291 addons.go:69] Setting ingress-dns=true in profile "addons-109866"
	I0311 12:48:32.331324  747291 addons.go:234] Setting addon ingress-dns=true in "addons-109866"
	I0311 12:48:32.331361  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.331774  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.332063  747291 addons.go:69] Setting inspektor-gadget=true in profile "addons-109866"
	I0311 12:48:32.332096  747291 addons.go:234] Setting addon inspektor-gadget=true in "addons-109866"
	I0311 12:48:32.332130  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.332528  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.332706  747291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:48:32.332984  747291 addons.go:69] Setting cloud-spanner=true in profile "addons-109866"
	I0311 12:48:32.333018  747291 addons.go:234] Setting addon cloud-spanner=true in "addons-109866"
	I0311 12:48:32.333040  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.333432  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.335301  747291 addons.go:69] Setting metrics-server=true in profile "addons-109866"
	I0311 12:48:32.335342  747291 addons.go:234] Setting addon metrics-server=true in "addons-109866"
	I0311 12:48:32.335381  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.335796  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.336277  747291 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-109866"
	I0311 12:48:32.336341  747291 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-109866"
	I0311 12:48:32.336368  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.337250  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.342912  747291 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-109866"
	I0311 12:48:32.342955  747291 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-109866"
	I0311 12:48:32.342997  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.343501  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.346872  747291 addons.go:69] Setting default-storageclass=true in profile "addons-109866"
	I0311 12:48:32.346925  747291 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-109866"
	I0311 12:48:32.347280  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.353979  747291 addons.go:69] Setting registry=true in profile "addons-109866"
	I0311 12:48:32.354023  747291 addons.go:234] Setting addon registry=true in "addons-109866"
	I0311 12:48:32.354061  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.354514  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.367336  747291 addons.go:69] Setting gcp-auth=true in profile "addons-109866"
	I0311 12:48:32.367390  747291 mustload.go:65] Loading cluster: addons-109866
	I0311 12:48:32.367582  747291 config.go:182] Loaded profile config "addons-109866": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:48:32.367836  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.373150  747291 addons.go:69] Setting storage-provisioner=true in profile "addons-109866"
	I0311 12:48:32.373208  747291 addons.go:234] Setting addon storage-provisioner=true in "addons-109866"
	I0311 12:48:32.373248  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.373778  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.395899  747291 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-109866"
	I0311 12:48:32.395945  747291 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-109866"
	I0311 12:48:32.396361  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.400578  747291 addons.go:69] Setting ingress=true in profile "addons-109866"
	I0311 12:48:32.400639  747291 addons.go:234] Setting addon ingress=true in "addons-109866"
	I0311 12:48:32.400687  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.401221  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.410758  747291 addons.go:69] Setting volumesnapshots=true in profile "addons-109866"
	I0311 12:48:32.410811  747291 addons.go:234] Setting addon volumesnapshots=true in "addons-109866"
	I0311 12:48:32.410850  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.411318  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.574560  747291 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0311 12:48:32.576318  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.591791  747291 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 12:48:32.592031  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0311 12:48:32.599515  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.612816  747291 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 12:48:32.623879  747291 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 12:48:32.624189  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 12:48:32.624314  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.629345  747291 addons.go:234] Setting addon default-storageclass=true in "addons-109866"
	I0311 12:48:32.629404  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.629921  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.669931  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0311 12:48:32.671941  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0311 12:48:32.673915  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0311 12:48:32.676147  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0311 12:48:32.682154  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0311 12:48:32.603674  747291 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0311 12:48:32.603682  747291 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0311 12:48:32.603686  747291 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0311 12:48:32.603695  747291 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0311 12:48:32.603705  747291 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0311 12:48:32.664349  747291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
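
The bash pipeline above rewrites CoreDNS's Corefile in place: the first sed expression inserts a hosts block ahead of the `forward . /etc/resolv.conf` line, the second inserts `log` ahead of `errors`, and the edited document is fed back through `kubectl replace`. Reconstructed from those sed expressions (not read back from the cluster), the edited Corefile fragment looks like:

	log
	errors
	...
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf
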
	I0311 12:48:32.708960  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0311 12:48:32.710962  747291 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 12:48:32.710983  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 12:48:32.711071  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.712923  747291 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0311 12:48:32.712944  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0311 12:48:32.713014  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.715560  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0311 12:48:32.713862  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0311 12:48:32.713897  747291 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0311 12:48:32.713969  747291 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 12:48:32.713985  747291 out.go:177]   - Using image docker.io/registry:2.8.3
	I0311 12:48:32.723170  747291 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0311 12:48:32.720889  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0311 12:48:32.720972  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0311 12:48:32.720979  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0311 12:48:32.723695  747291 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-109866"
	I0311 12:48:32.727476  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.727660  747291 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0311 12:48:32.729340  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0311 12:48:32.729427  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.732302  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.746652  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.747256  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.751280  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0311 12:48:32.759438  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0311 12:48:32.759532  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0311 12:48:32.759668  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.775980  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.821719  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0311 12:48:32.821681  747291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 12:48:32.849077  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0311 12:48:32.849512  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0311 12:48:32.850448  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.852909  747291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:48:32.870632  747291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:48:32.890072  747291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0311 12:48:32.898984  747291 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 12:48:32.899058  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0311 12:48:32.899142  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.903916  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:32.904027  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:32.962476  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:32.967074  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.014625  747291 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 12:48:33.014649  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 12:48:33.014715  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:33.019190  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.025456  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.073489  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.089506  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.094389  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.095139  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.101486  747291 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0311 12:48:33.096207  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.111245  747291 out.go:177]   - Using image docker.io/busybox:stable
	I0311 12:48:33.110755  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.113733  747291 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 12:48:33.113751  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0311 12:48:33.113817  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:33.144322  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	W0311 12:48:33.156257  747291 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0311 12:48:33.156288  747291 retry.go:31] will retry after 126.16413ms: ssh: handshake failed: EOF
	W0311 12:48:33.284234  747291 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0311 12:48:33.284259  747291 retry.go:31] will retry after 259.19574ms: ssh: handshake failed: EOF
	I0311 12:48:33.608906  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0311 12:48:33.755652  747291 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0311 12:48:33.755726  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0311 12:48:33.775683  747291 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0311 12:48:33.775715  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0311 12:48:33.780725  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 12:48:33.790831  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 12:48:33.793780  747291 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0311 12:48:33.793809  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0311 12:48:33.798299  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 12:48:33.930534  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 12:48:33.934452  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 12:48:33.961500  747291 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 12:48:33.961525  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0311 12:48:33.971594  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0311 12:48:33.971620  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0311 12:48:33.975941  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 12:48:34.053897  747291 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0311 12:48:34.053923  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0311 12:48:34.056803  747291 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0311 12:48:34.056830  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0311 12:48:34.142714  747291 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0311 12:48:34.142783  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0311 12:48:34.205077  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0311 12:48:34.205150  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0311 12:48:34.309497  747291 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 12:48:34.309574  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 12:48:34.320723  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0311 12:48:34.357291  747291 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0311 12:48:34.357372  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0311 12:48:34.448566  747291 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 12:48:34.448641  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 12:48:34.476888  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0311 12:48:34.476972  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0311 12:48:34.489502  747291 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0311 12:48:34.489577  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0311 12:48:34.508390  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0311 12:48:34.508479  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0311 12:48:34.524709  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 12:48:34.606091  747291 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0311 12:48:34.606162  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0311 12:48:34.607862  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0311 12:48:34.607923  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0311 12:48:34.642335  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0311 12:48:34.642408  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0311 12:48:34.761186  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0311 12:48:34.761258  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0311 12:48:34.785354  747291 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0311 12:48:34.785425  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0311 12:48:34.868285  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0311 12:48:34.868358  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0311 12:48:35.077973  747291 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 12:48:35.078057  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0311 12:48:35.122914  747291 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0311 12:48:35.122988  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0311 12:48:35.173991  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0311 12:48:35.174069  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0311 12:48:35.241515  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0311 12:48:35.241586  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0311 12:48:35.463337  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 12:48:35.486680  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0311 12:48:35.486751  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0311 12:48:35.576256  747291 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.735244891s)
	I0311 12:48:35.576732  747291 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.892304998s)
	I0311 12:48:35.576808  747291 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0311 12:48:35.578063  747291 node_ready.go:35] waiting up to 6m0s for node "addons-109866" to be "Ready" ...
	I0311 12:48:35.582997  747291 node_ready.go:49] node "addons-109866" has status "Ready":"True"
	I0311 12:48:35.583018  747291 node_ready.go:38] duration metric: took 4.695372ms for node "addons-109866" to be "Ready" ...
	I0311 12:48:35.583028  747291 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 12:48:35.585729  747291 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 12:48:35.585794  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0311 12:48:35.605373  747291 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:35.607408  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0311 12:48:35.897587  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 12:48:35.899501  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0311 12:48:35.899563  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0311 12:48:36.071543  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0311 12:48:36.071615  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0311 12:48:36.082160  747291 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-109866" context rescaled to 1 replicas
	I0311 12:48:36.317912  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0311 12:48:36.317976  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0311 12:48:36.349613  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 12:48:36.349677  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0311 12:48:36.526301  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 12:48:36.952352  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.343354144s)
	I0311 12:48:36.952468  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.171652905s)
	I0311 12:48:37.612769  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:38.134176  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.343307986s)
	I0311 12:48:39.416236  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.61790455s)
	I0311 12:48:39.416430  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.4858686s)
	I0311 12:48:39.416473  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.48199751s)
	I0311 12:48:39.457975  747291 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0311 12:48:39.458068  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:39.518024  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	W0311 12:48:39.540145  747291 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
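
The warning above is an optimistic-concurrency conflict, not a hard failure: another writer updated the `local-path` StorageClass between read and write, so the update was rejected with a stale resourceVersion and is retried. Default-class status is carried by an annotation; a hedged, illustrative equivalent of the operation being attempted (assuming the standard default-class annotation is what is toggled here):

	# mark local-path as non-default so the "standard" class can become the default
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
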
	I0311 12:48:39.678310  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:39.954324  747291 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0311 12:48:40.039207  747291 addons.go:234] Setting addon gcp-auth=true in "addons-109866"
	I0311 12:48:40.039287  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:40.039880  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:40.077160  747291 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0311 12:48:40.077223  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:40.116875  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:41.409911  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.433924051s)
	I0311 12:48:41.409945  747291 addons.go:470] Verifying addon ingress=true in "addons-109866"
	I0311 12:48:41.412194  747291 out.go:177] * Verifying ingress addon...
	I0311 12:48:41.410164  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.089356018s)
	I0311 12:48:41.410257  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.885399668s)
	I0311 12:48:41.410337  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.946929106s)
	I0311 12:48:41.410374  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.802900376s)
	I0311 12:48:41.410416  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.512803596s)
	I0311 12:48:41.414973  747291 addons.go:470] Verifying addon registry=true in "addons-109866"
	I0311 12:48:41.417649  747291 out.go:177] * Verifying registry addon...
	I0311 12:48:41.415702  747291 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0311 12:48:41.415734  747291 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0311 12:48:41.415747  747291 addons.go:470] Verifying addon metrics-server=true in "addons-109866"
	I0311 12:48:41.421663  747291 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-109866 service yakd-dashboard -n yakd-dashboard
	
	I0311 12:48:41.420341  747291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0311 12:48:41.420370  747291 retry.go:31] will retry after 260.813213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
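
Both apply failures above are the ordering problem the stderr names: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and the REST mapping for `snapshot.storage.k8s.io/v1` is not discoverable until that CRD is established. The `--force` retry below succeeds once the CRDs exist; a hedged manual equivalent would split the apply and wait for CRD establishment in between:

	# install the CRD first, wait until the apiserver serves it, then apply objects of that kind
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
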
	I0311 12:48:41.425591  747291 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0311 12:48:41.425615  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:41.429372  747291 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0311 12:48:41.429437  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:41.684641  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 12:48:41.953871  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:41.954543  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:42.135794  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:42.431138  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:42.431409  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:42.922677  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.396279866s)
	I0311 12:48:42.922712  747291 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-109866"
	I0311 12:48:42.928165  747291 out.go:177] * Verifying csi-hostpath-driver addon...
	I0311 12:48:42.922897  747291 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.845712958s)
	I0311 12:48:42.934051  747291 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0311 12:48:42.931260  747291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0311 12:48:42.931834  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:42.938127  747291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:48:42.936898  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:42.940024  747291 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0311 12:48:42.940044  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0311 12:48:42.954435  747291 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0311 12:48:42.954462  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:43.042557  747291 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0311 12:48:43.042586  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0311 12:48:43.080276  747291 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 12:48:43.080340  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0311 12:48:43.102723  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 12:48:43.425736  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:43.430177  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:43.442330  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:43.764356  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.079619149s)
	I0311 12:48:43.926740  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:43.930398  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:43.942764  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:44.142266  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:44.199193  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.096390963s)
	I0311 12:48:44.202038  747291 addons.go:470] Verifying addon gcp-auth=true in "addons-109866"
	I0311 12:48:44.204299  747291 out.go:177] * Verifying gcp-auth addon...
	I0311 12:48:44.206854  747291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0311 12:48:44.210463  747291 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0311 12:48:44.210488  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:44.425203  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:44.428879  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:44.442568  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:44.714104  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:44.925306  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:44.928136  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:44.941727  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:45.211464  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:45.425750  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:45.430135  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:45.443105  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:45.712000  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:45.926025  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:45.929433  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:45.941046  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:46.210931  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:46.425580  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:46.430356  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:46.442526  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:46.612118  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:46.712321  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:46.924961  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:46.929074  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:46.942243  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:47.211354  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:47.425348  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:47.429841  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:47.441303  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:47.710328  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:47.930668  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:47.932059  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:47.942310  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:48.211171  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:48.426467  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:48.433317  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:48.482444  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:48.614365  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:48.711540  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:48.926348  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:48.930092  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:48.942445  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:49.211997  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:49.436122  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:49.440786  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:49.450167  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:49.612962  747291 pod_ready.go:92] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.612997  747291 pod_ready.go:81] duration metric: took 14.007577169s for pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.613027  747291 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ttll7" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.615487  747291 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ttll7" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ttll7" not found
	I0311 12:48:49.615547  747291 pod_ready.go:81] duration metric: took 2.506618ms for pod "coredns-5dd5756b68-ttll7" in "kube-system" namespace to be "Ready" ...
	E0311 12:48:49.615573  747291 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ttll7" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ttll7" not found
	I0311 12:48:49.615595  747291 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.622074  747291 pod_ready.go:92] pod "etcd-addons-109866" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.622100  747291 pod_ready.go:81] duration metric: took 6.479796ms for pod "etcd-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.622116  747291 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.628700  747291 pod_ready.go:92] pod "kube-apiserver-addons-109866" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.628723  747291 pod_ready.go:81] duration metric: took 6.570544ms for pod "kube-apiserver-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.628735  747291 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.639539  747291 pod_ready.go:92] pod "kube-controller-manager-addons-109866" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.639610  747291 pod_ready.go:81] duration metric: took 10.867116ms for pod "kube-controller-manager-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.639636  747291 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sbsmh" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.710540  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:49.810553  747291 pod_ready.go:92] pod "kube-proxy-sbsmh" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.810580  747291 pod_ready.go:81] duration metric: took 170.921937ms for pod "kube-proxy-sbsmh" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.810592  747291 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.925502  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:49.928614  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:49.941434  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:50.210766  747291 pod_ready.go:92] pod "kube-scheduler-addons-109866" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:50.210792  747291 pod_ready.go:81] duration metric: took 400.192911ms for pod "kube-scheduler-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:50.210804  747291 pod_ready.go:38] duration metric: took 14.627765173s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 12:48:50.210840  747291 api_server.go:52] waiting for apiserver process to appear ...
	I0311 12:48:50.210934  747291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 12:48:50.213086  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:50.230256  747291 api_server.go:72] duration metric: took 17.904331374s to wait for apiserver process to appear ...
	I0311 12:48:50.230329  747291 api_server.go:88] waiting for apiserver healthz status ...
	I0311 12:48:50.230366  747291 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 12:48:50.239228  747291 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0311 12:48:50.241400  747291 api_server.go:141] control plane version: v1.28.4
	I0311 12:48:50.241431  747291 api_server.go:131] duration metric: took 11.079915ms to wait for apiserver health ...
	I0311 12:48:50.241442  747291 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 12:48:50.431800  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:50.436447  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:50.444200  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:50.449095  747291 system_pods.go:59] 18 kube-system pods found
	I0311 12:48:50.449143  747291 system_pods.go:61] "coredns-5dd5756b68-k6fgr" [e5c98387-4a6b-4a1b-9d84-0ba3de8e1798] Running
	I0311 12:48:50.449154  747291 system_pods.go:61] "csi-hostpath-attacher-0" [6ea56bf9-3a15-4722-aed9-c371a7a41885] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 12:48:50.449163  747291 system_pods.go:61] "csi-hostpath-resizer-0" [7a2ce1e9-7676-47c3-b51c-e771ca974f68] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 12:48:50.449175  747291 system_pods.go:61] "csi-hostpathplugin-ppdhc" [7a7d2e57-ae08-4a57-83cb-84db6e736c72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 12:48:50.449186  747291 system_pods.go:61] "etcd-addons-109866" [235fd778-1509-445c-b8d2-0e5a9a43192c] Running
	I0311 12:48:50.449190  747291 system_pods.go:61] "kindnet-dhnct" [41d1c11b-a3a9-478d-ad84-fe95dbd72f82] Running
	I0311 12:48:50.449194  747291 system_pods.go:61] "kube-apiserver-addons-109866" [fb04e6cf-bccf-4ccf-b7f3-6bf00a27afa8] Running
	I0311 12:48:50.449199  747291 system_pods.go:61] "kube-controller-manager-addons-109866" [3dc4f128-46e8-42ec-8621-799298eaac21] Running
	I0311 12:48:50.449209  747291 system_pods.go:61] "kube-ingress-dns-minikube" [fd805e6a-7c5e-423b-b249-5bf6eae790f1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 12:48:50.449214  747291 system_pods.go:61] "kube-proxy-sbsmh" [f7e8830b-f777-4eb9-bbdb-517eee989dd1] Running
	I0311 12:48:50.449220  747291 system_pods.go:61] "kube-scheduler-addons-109866" [c06db79c-c7d0-4935-a7f9-6642a62fc830] Running
	I0311 12:48:50.449226  747291 system_pods.go:61] "metrics-server-69cf46c98-vx8dw" [e82eba82-b9ef-4607-9492-6a41d0ca5885] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 12:48:50.449233  747291 system_pods.go:61] "nvidia-device-plugin-daemonset-jd445" [6386f2cb-771c-4f32-9490-ef0becc98007] Running
	I0311 12:48:50.449238  747291 system_pods.go:61] "registry-htvdt" [a674420f-29d1-47aa-96b0-e37549d4e224] Running
	I0311 12:48:50.449250  747291 system_pods.go:61] "registry-proxy-t89kt" [b429a937-8de5-46fc-885a-51a33440731e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 12:48:50.449258  747291 system_pods.go:61] "snapshot-controller-58dbcc7b99-glnz7" [d40bb50d-a35a-46fb-8da3-61c5565336d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 12:48:50.449269  747291 system_pods.go:61] "snapshot-controller-58dbcc7b99-m464j" [a4136872-af85-40c4-b509-a24da63d7681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 12:48:50.449274  747291 system_pods.go:61] "storage-provisioner" [2724cbee-338e-432f-953e-5d651d12e62f] Running
	I0311 12:48:50.449282  747291 system_pods.go:74] duration metric: took 207.832641ms to wait for pod list to return data ...
	I0311 12:48:50.449295  747291 default_sa.go:34] waiting for default service account to be created ...
	I0311 12:48:50.610114  747291 default_sa.go:45] found service account: "default"
	I0311 12:48:50.610139  747291 default_sa.go:55] duration metric: took 160.836961ms for default service account to be created ...
	I0311 12:48:50.610149  747291 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 12:48:50.714829  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:50.818582  747291 system_pods.go:86] 18 kube-system pods found
	I0311 12:48:50.818625  747291 system_pods.go:89] "coredns-5dd5756b68-k6fgr" [e5c98387-4a6b-4a1b-9d84-0ba3de8e1798] Running
	I0311 12:48:50.818637  747291 system_pods.go:89] "csi-hostpath-attacher-0" [6ea56bf9-3a15-4722-aed9-c371a7a41885] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 12:48:50.818645  747291 system_pods.go:89] "csi-hostpath-resizer-0" [7a2ce1e9-7676-47c3-b51c-e771ca974f68] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 12:48:50.818654  747291 system_pods.go:89] "csi-hostpathplugin-ppdhc" [7a7d2e57-ae08-4a57-83cb-84db6e736c72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 12:48:50.818660  747291 system_pods.go:89] "etcd-addons-109866" [235fd778-1509-445c-b8d2-0e5a9a43192c] Running
	I0311 12:48:50.818664  747291 system_pods.go:89] "kindnet-dhnct" [41d1c11b-a3a9-478d-ad84-fe95dbd72f82] Running
	I0311 12:48:50.818670  747291 system_pods.go:89] "kube-apiserver-addons-109866" [fb04e6cf-bccf-4ccf-b7f3-6bf00a27afa8] Running
	I0311 12:48:50.818675  747291 system_pods.go:89] "kube-controller-manager-addons-109866" [3dc4f128-46e8-42ec-8621-799298eaac21] Running
	I0311 12:48:50.818683  747291 system_pods.go:89] "kube-ingress-dns-minikube" [fd805e6a-7c5e-423b-b249-5bf6eae790f1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 12:48:50.818687  747291 system_pods.go:89] "kube-proxy-sbsmh" [f7e8830b-f777-4eb9-bbdb-517eee989dd1] Running
	I0311 12:48:50.818697  747291 system_pods.go:89] "kube-scheduler-addons-109866" [c06db79c-c7d0-4935-a7f9-6642a62fc830] Running
	I0311 12:48:50.818703  747291 system_pods.go:89] "metrics-server-69cf46c98-vx8dw" [e82eba82-b9ef-4607-9492-6a41d0ca5885] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 12:48:50.818712  747291 system_pods.go:89] "nvidia-device-plugin-daemonset-jd445" [6386f2cb-771c-4f32-9490-ef0becc98007] Running
	I0311 12:48:50.818717  747291 system_pods.go:89] "registry-htvdt" [a674420f-29d1-47aa-96b0-e37549d4e224] Running
	I0311 12:48:50.818722  747291 system_pods.go:89] "registry-proxy-t89kt" [b429a937-8de5-46fc-885a-51a33440731e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 12:48:50.818729  747291 system_pods.go:89] "snapshot-controller-58dbcc7b99-glnz7" [d40bb50d-a35a-46fb-8da3-61c5565336d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 12:48:50.818735  747291 system_pods.go:89] "snapshot-controller-58dbcc7b99-m464j" [a4136872-af85-40c4-b509-a24da63d7681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 12:48:50.818739  747291 system_pods.go:89] "storage-provisioner" [2724cbee-338e-432f-953e-5d651d12e62f] Running
	I0311 12:48:50.818749  747291 system_pods.go:126] duration metric: took 208.592768ms to wait for k8s-apps to be running ...
	I0311 12:48:50.818757  747291 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 12:48:50.818816  747291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 12:48:50.838658  747291 system_svc.go:56] duration metric: took 19.891216ms WaitForService to wait for kubelet
	I0311 12:48:50.838739  747291 kubeadm.go:576] duration metric: took 18.51281897s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 12:48:50.838867  747291 node_conditions.go:102] verifying NodePressure condition ...
	I0311 12:48:50.925399  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:50.928503  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:50.942755  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:51.010638  747291 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 12:48:51.010673  747291 node_conditions.go:123] node cpu capacity is 2
	I0311 12:48:51.010687  747291 node_conditions.go:105] duration metric: took 171.787794ms to run NodePressure ...
	I0311 12:48:51.010701  747291 start.go:240] waiting for startup goroutines ...
	I0311 12:48:51.211767  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:51.430218  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:51.431246  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:51.442335  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:51.711894  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:51.928498  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:51.931688  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:51.943101  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:52.210728  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:52.425478  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:52.428586  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:52.442244  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:52.711170  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:52.933400  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:52.938854  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:52.953080  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:53.211496  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:53.426270  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:53.443505  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:53.444853  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:53.711091  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:53.925196  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:53.928281  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:53.942155  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:54.211087  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:54.426252  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:54.430618  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:54.442771  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:54.712317  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:54.926077  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:54.929504  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:54.942518  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:55.211147  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:55.425616  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:55.429904  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:55.441880  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:55.712240  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:55.930243  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:55.936378  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:55.944817  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:56.210688  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:56.426663  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:56.429971  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:56.444255  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:56.711349  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:56.925567  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:56.929377  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:56.942434  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:57.211050  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:57.432277  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:57.447798  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:57.450899  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:57.710884  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:57.927579  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:57.930101  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:57.941340  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:58.210591  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:58.426177  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:58.431146  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:58.445636  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:58.711144  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:58.927406  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:58.929970  747291 kapi.go:107] duration metric: took 17.509627272s to wait for kubernetes.io/minikube-addons=registry ...
	I0311 12:48:58.941572  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:59.210384  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:59.425325  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:59.443461  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:59.710858  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:59.925797  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:59.942603  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:00.212227  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:00.425836  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:00.449683  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:00.711619  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:00.926227  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:00.951670  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:01.210816  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:01.427002  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:01.442851  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:01.716473  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:01.931285  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:01.943636  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:02.211424  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:02.425744  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:02.443183  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:02.711144  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:02.924705  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:02.943790  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:03.210323  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:03.425043  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:03.441304  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:03.710901  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:03.925636  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:03.942930  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:04.210390  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:04.427471  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:04.441922  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:04.710538  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:04.925331  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:04.942385  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:05.210659  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:05.425156  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:05.441573  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:05.711793  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:05.933419  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:05.941759  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:06.210908  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:06.426561  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:06.444056  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:06.712882  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:06.926327  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:06.945349  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:07.211543  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:07.431387  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:07.449057  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:07.713557  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:07.925155  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:07.942736  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:08.210723  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:08.425487  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:08.442008  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:08.711732  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:08.925191  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:08.942317  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:09.211015  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:09.425724  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:09.442328  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:09.711445  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:09.925191  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:09.941676  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:10.210673  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:10.425196  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:10.442426  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:10.711470  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:10.925247  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:10.941885  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:11.210815  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:11.425994  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:11.441998  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:11.710273  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:11.926759  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:11.942539  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:12.212841  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:12.427587  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:12.441897  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:12.711367  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:12.926121  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:12.944002  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:13.211229  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:13.425303  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:13.442278  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:13.711294  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:13.924806  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:13.942202  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:14.210885  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:14.425673  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:14.443346  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:14.712025  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:14.925803  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:14.941752  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:15.210681  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:15.425276  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:15.442865  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:15.711106  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:15.925243  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:15.943506  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:16.211746  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:16.425441  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:16.442275  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:16.712464  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:16.925853  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:16.941817  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:17.213616  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:17.431343  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:17.442870  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:17.711515  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:17.926278  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:17.943184  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:18.211711  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:18.425339  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:18.442588  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:18.711538  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:18.925531  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:18.942468  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:19.211926  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:19.425919  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:19.442034  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:19.711228  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:19.925564  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:19.942284  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:20.211225  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:20.425206  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:20.442365  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:20.710821  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:20.925515  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:20.942511  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:21.213176  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:21.425103  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:21.441467  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:21.712106  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:21.934904  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:21.946227  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:22.211688  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:22.426022  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:22.441442  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:22.711532  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:22.925682  747291 kapi.go:107] duration metric: took 41.509976827s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0311 12:49:22.945022  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:23.211064  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:23.442033  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:23.711726  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:23.942956  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:24.210920  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:24.441526  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:24.711083  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:24.941735  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:25.218042  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:25.442186  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:25.710993  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:25.941764  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:26.210364  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:26.441955  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:26.711542  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:26.946166  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:27.217948  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:27.441573  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:27.712773  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:27.942066  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:28.211200  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:28.441942  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:28.712247  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:28.942649  747291 kapi.go:107] duration metric: took 46.011387477s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0311 12:49:29.210249  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:29.711049  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:30.211577  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:30.711465  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:31.210439  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:31.710397  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:32.211258  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:32.710842  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:33.211754  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:33.712569  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:34.210559  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:34.710669  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:35.210902  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:35.711595  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:36.211164  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:36.711345  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:37.210632  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:37.710891  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:38.210841  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:38.711205  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:39.211498  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:39.710493  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:40.211167  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:40.712228  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:41.210964  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:41.711563  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:42.211864  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:42.715180  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:43.213394  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:43.710388  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:44.210487  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:44.711194  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:45.211618  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:45.710885  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:46.210517  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:46.711248  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:47.210515  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:47.710643  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:48.211650  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:48.710591  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:49.210253  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:49.711955  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:50.211480  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:50.711078  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:51.211292  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:51.710720  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:52.210316  747291 kapi.go:107] duration metric: took 1m8.003457134s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0311 12:49:52.211916  747291 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-109866 cluster.
	I0311 12:49:52.214033  747291 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0311 12:49:52.215566  747291 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0311 12:49:52.217283  747291 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0311 12:49:52.218893  747291 addons.go:505] duration metric: took 1m19.892711s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0311 12:49:52.218941  747291 start.go:245] waiting for cluster config update ...
	I0311 12:49:52.218961  747291 start.go:254] writing updated cluster config ...
	I0311 12:49:52.219271  747291 ssh_runner.go:195] Run: rm -f paused
	I0311 12:49:52.530893  747291 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 12:49:52.532907  747291 out.go:177] * Done! kubectl is now configured to use "addons-109866" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                         ATTEMPT             POD ID              POD
	4bf8c0cb9aee9       dd1b12fcb6097       6 seconds ago       Exited              hello-world-app              2                   20587bff3e003       hello-world-app-5d77478584-cdg5s
	8ce574b71d0a1       be5e6f23a9904       31 seconds ago      Running             nginx                        0                   c7bdeb86e5573       nginx
	7d4c33eca9646       1693ecaac5c76       43 seconds ago      Running             headlamp                     0                   3103615702a44       headlamp-5485c556b-jgglr
	849cf95bb8776       bafe72500920c       2 minutes ago       Running             gcp-auth                     0                   c36c69d1994c7       gcp-auth-5f6b4f85fd-5ltww
	232b2bf74420e       1a024e390dd05       2 minutes ago       Exited              patch                        1                   4e6a266ad4556       ingress-nginx-admission-patch-tq8qj
	d21561d10b3f2       1a024e390dd05       2 minutes ago       Exited              create                       0                   149ef766c4f67       ingress-nginx-admission-create-2lcwn
	042616b30d8e0       20e3f2db01e81       2 minutes ago       Running             yakd                         0                   10703483ce7b9       yakd-dashboard-9947fc6bf-cg9cw
	0c9abe5f0ef0c       4d1e5c3e97420       2 minutes ago       Running             volume-snapshot-controller   0                   f5081c5f44a57       snapshot-controller-58dbcc7b99-glnz7
	d325bdb344e77       4d1e5c3e97420       2 minutes ago       Running             volume-snapshot-controller   0                   56b6a1de92aa2       snapshot-controller-58dbcc7b99-m464j
	4deddb1b3e244       97e04611ad434       3 minutes ago       Running             coredns                      0                   0f28cf1e914c7       coredns-5dd5756b68-k6fgr
	3cbaca7299409       ba04bb24b9575       3 minutes ago       Running             storage-provisioner          0                   f0f892b4050b6       storage-provisioner
	75830bc702c7c       4740c1948d3fc       3 minutes ago       Running             kindnet-cni                  0                   1daf4c11e983d       kindnet-dhnct
	2260a3b94348b       3ca3ca488cf13       3 minutes ago       Running             kube-proxy                   0                   c9b569705eb58       kube-proxy-sbsmh
	d253b8b91fc0b       05c284c929889       3 minutes ago       Running             kube-scheduler               0                   d046b3483e8c7       kube-scheduler-addons-109866
	ce5f4d541da98       9961cbceaf234       3 minutes ago       Running             kube-controller-manager      0                   8848dd3c6ee03       kube-controller-manager-addons-109866
	a6165e0945dce       04b4c447bb9d4       3 minutes ago       Running             kube-apiserver               0                   fcb97d51c231f       kube-apiserver-addons-109866
	fcca5b6d52b25       9cdd6470f48c8       3 minutes ago       Running             etcd                         0                   211516c8a58d4       etcd-addons-109866
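	
	Editor's note: the ATTEMPT column above shows hello-world-app-5d77478584-cdg5s already Exited on attempt 2, i.e. the container is crash-looping while everything else in the table is Running. A minimal client-go sketch that surfaces the same restart counts programmatically; the kubeconfig location is an assumption, and the namespace matches the test's "default" namespace.

	```go
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes ~/.kube/config currently points at the addons-109866 cluster.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			for _, cs := range pod.Status.ContainerStatuses {
				// RestartCount corresponds to the ATTEMPT column in the table above.
				fmt.Printf("%s/%s restarts=%d ready=%v\n", pod.Name, cs.Name, cs.RestartCount, cs.Ready)
			}
		}
	}
	```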
	
	
	==> containerd <==
	Mar 11 12:51:49 addons-109866 containerd[757]: time="2024-03-11T12:51:49.987225595Z" level=info msg="StartContainer for \"4bf8c0cb9aee9b6e4eb5b46ec436b7c45554e0de7dae6e11f9aa8222df356c5c\""
	Mar 11 12:51:50 addons-109866 containerd[757]: time="2024-03-11T12:51:50.057420404Z" level=info msg="StartContainer for \"4bf8c0cb9aee9b6e4eb5b46ec436b7c45554e0de7dae6e11f9aa8222df356c5c\" returns successfully"
	Mar 11 12:51:50 addons-109866 containerd[757]: time="2024-03-11T12:51:50.092738697Z" level=info msg="shim disconnected" id=4bf8c0cb9aee9b6e4eb5b46ec436b7c45554e0de7dae6e11f9aa8222df356c5c
	Mar 11 12:51:50 addons-109866 containerd[757]: time="2024-03-11T12:51:50.092923337Z" level=warning msg="cleaning up after shim disconnected" id=4bf8c0cb9aee9b6e4eb5b46ec436b7c45554e0de7dae6e11f9aa8222df356c5c namespace=k8s.io
	Mar 11 12:51:50 addons-109866 containerd[757]: time="2024-03-11T12:51:50.092939427Z" level=info msg="cleaning up dead shim"
	Mar 11 12:51:50 addons-109866 containerd[757]: time="2024-03-11T12:51:50.102204864Z" level=warning msg="cleanup warnings time=\"2024-03-11T12:51:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10737 runtime=io.containerd.runc.v2\n"
	Mar 11 12:51:50 addons-109866 containerd[757]: time="2024-03-11T12:51:50.366170593Z" level=info msg="RemoveContainer for \"9469f6b7228e758a091b7489e620c8d5dae7481beee1654b93737e16bdb9c62a\""
	Mar 11 12:51:50 addons-109866 containerd[757]: time="2024-03-11T12:51:50.373203231Z" level=info msg="RemoveContainer for \"9469f6b7228e758a091b7489e620c8d5dae7481beee1654b93737e16bdb9c62a\" returns successfully"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.092107937Z" level=info msg="Kill container \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\""
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.146718760Z" level=info msg="shim disconnected" id=0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.146790801Z" level=warning msg="cleaning up after shim disconnected" id=0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c namespace=k8s.io
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.146803708Z" level=info msg="cleaning up dead shim"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.155461466Z" level=warning msg="cleanup warnings time=\"2024-03-11T12:51:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10769 runtime=io.containerd.runc.v2\n"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.159335261Z" level=info msg="StopContainer for \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\" returns successfully"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.160078135Z" level=info msg="StopPodSandbox for \"a3570b62cd2bb554d28c6b7c0d680b03769bc11724e0e65ec3960af8d1018359\""
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.160253126Z" level=info msg="Container to stop \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.202707212Z" level=info msg="shim disconnected" id=a3570b62cd2bb554d28c6b7c0d680b03769bc11724e0e65ec3960af8d1018359
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.202774109Z" level=warning msg="cleaning up after shim disconnected" id=a3570b62cd2bb554d28c6b7c0d680b03769bc11724e0e65ec3960af8d1018359 namespace=k8s.io
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.202787122Z" level=info msg="cleaning up dead shim"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.211229102Z" level=warning msg="cleanup warnings time=\"2024-03-11T12:51:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10801 runtime=io.containerd.runc.v2\n"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.261630599Z" level=info msg="TearDown network for sandbox \"a3570b62cd2bb554d28c6b7c0d680b03769bc11724e0e65ec3960af8d1018359\" successfully"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.261855935Z" level=info msg="StopPodSandbox for \"a3570b62cd2bb554d28c6b7c0d680b03769bc11724e0e65ec3960af8d1018359\" returns successfully"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.366402869Z" level=info msg="RemoveContainer for \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\""
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.372588051Z" level=info msg="RemoveContainer for \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\" returns successfully"
	Mar 11 12:51:51 addons-109866 containerd[757]: time="2024-03-11T12:51:51.373234983Z" level=error msg="ContainerStatus for \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\": not found"
	
	
	==> coredns [4deddb1b3e244037464605871f9ffd92cde5acd350edcb7658058f9b4bbfdfc7] <==
	[INFO] 10.244.0.19:43336 - 23917 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051364s
	[INFO] 10.244.0.19:43336 - 6365 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067135s
	[INFO] 10.244.0.19:43336 - 58841 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047458s
	[INFO] 10.244.0.19:43336 - 28531 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049107s
	[INFO] 10.244.0.19:43336 - 49123 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001394959s
	[INFO] 10.244.0.19:43336 - 58899 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00113809s
	[INFO] 10.244.0.19:43336 - 3383 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046531s
	[INFO] 10.244.0.19:52987 - 62485 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000113133s
	[INFO] 10.244.0.19:60089 - 13110 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036874s
	[INFO] 10.244.0.19:52987 - 62066 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037465s
	[INFO] 10.244.0.19:60089 - 11999 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066199s
	[INFO] 10.244.0.19:52987 - 54173 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00019067s
	[INFO] 10.244.0.19:52987 - 13732 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067931s
	[INFO] 10.244.0.19:52987 - 27665 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055614s
	[INFO] 10.244.0.19:52987 - 30485 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052464s
	[INFO] 10.244.0.19:60089 - 33828 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082921s
	[INFO] 10.244.0.19:60089 - 51990 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070539s
	[INFO] 10.244.0.19:52987 - 33704 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001258426s
	[INFO] 10.244.0.19:60089 - 63062 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050166s
	[INFO] 10.244.0.19:52987 - 4405 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000946336s
	[INFO] 10.244.0.19:60089 - 31782 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037571s
	[INFO] 10.244.0.19:52987 - 58340 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048156s
	[INFO] 10.244.0.19:60089 - 32295 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000987132s
	[INFO] 10.244.0.19:60089 - 18431 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00117836s
	[INFO] 10.244.0.19:60089 - 17548 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057961s
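
	The NXDOMAIN-then-NOERROR pattern above is ordinary resolv.conf search-path expansion (ndots:5): the service name is retried against each search domain (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) until the fully qualified form answers NOERROR. A minimal Go sketch, not part of the test suite, that resolves the fully qualified name directly and so skips the expansion (assumes it runs inside the cluster with the default resolv.conf):

	    package main

	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()
	        // The trailing dot marks the name as fully qualified, so the resolver
	        // does not append any of the search domains seen in the log above.
	        addrs, err := net.DefaultResolver.LookupHost(ctx, "hello-world-app.default.svc.cluster.local.")
	        if err != nil {
	            fmt.Println("lookup failed:", err)
	            return
	        }
	        fmt.Println(addrs)
	    }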
	
	
	==> describe nodes <==
	Name:               addons-109866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-109866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=addons-109866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T12_48_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-109866
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 12:48:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-109866
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 12:51:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 12:51:54 +0000   Mon, 11 Mar 2024 12:48:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 12:51:54 +0000   Mon, 11 Mar 2024 12:48:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 12:51:54 +0000   Mon, 11 Mar 2024 12:48:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 12:51:54 +0000   Mon, 11 Mar 2024 12:48:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-109866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bd5453af7c34cb1b04f96a160b0fb4f
	  System UUID:                8e525322-b209-4cf4-bc23-f3ded0274e04
	  Boot ID:                    26506771-5b0e-4b52-8e79-b1a5a7798867
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-cdg5s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  gcp-auth                    gcp-auth-5f6b4f85fd-5ltww                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  headlamp                    headlamp-5485c556b-jgglr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-5dd5756b68-k6fgr                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m23s
	  kube-system                 etcd-addons-109866                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m36s
	  kube-system                 kindnet-dhnct                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m24s
	  kube-system                 kube-apiserver-addons-109866             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-controller-manager-addons-109866    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-proxy-sbsmh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-addons-109866             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 snapshot-controller-58dbcc7b99-glnz7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 snapshot-controller-58dbcc7b99-m464j     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-cg9cw           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     3m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m46s (x8 over 3m46s)  kubelet          Node addons-109866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x8 over 3m46s)  kubelet          Node addons-109866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x7 over 3m46s)  kubelet          Node addons-109866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m36s                  kubelet          Node addons-109866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m36s                  kubelet          Node addons-109866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m36s                  kubelet          Node addons-109866 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m36s                  kubelet          Node addons-109866 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m26s                  kubelet          Node addons-109866 status is now: NodeReady
	  Normal  RegisteredNode           3m25s                  node-controller  Node addons-109866 event: Registered Node addons-109866 in Controller
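
	The node description above (conditions, capacity, allocations) is served by the API server; a hedged client-go sketch, assuming kubeconfig access to the addons-109866 cluster rather than the test harness, that reads back the Conditions table programmatically:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Load ~/.kube/config; inside a pod, rest.InClusterConfig() would be used instead.
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	        node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-109866", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        // Prints the same Type/Status/Reason triples as the Conditions block above.
	        for _, c := range node.Status.Conditions {
	            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	        }
	    }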
	
	
	==> dmesg <==
	[  +0.001009] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=0000000083809770
	[  +0.001111] FS-Cache: N-key=[8] '603c5c0100000000'
	[  +0.003066] FS-Cache: Duplicate cookie detected
	[  +0.000736] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=00000000bc2d205d
	[  +0.001189] FS-Cache: O-key=[8] '603c5c0100000000'
	[  +0.000722] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=0000000051073a19
	[  +0.001067] FS-Cache: N-key=[8] '603c5c0100000000'
	[  +2.719188] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001095] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=0000000013ff9938
	[  +0.001167] FS-Cache: O-key=[8] '5f3c5c0100000000'
	[  +0.000709] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=000000008609c792
	[  +0.001120] FS-Cache: N-key=[8] '5f3c5c0100000000'
	[  +0.365130] FS-Cache: Duplicate cookie detected
	[  +0.000776] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001043] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=00000000c4ee4e31
	[  +0.001133] FS-Cache: O-key=[8] '653c5c0100000000'
	[  +0.000748] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000983] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=0000000083809770
	[  +0.001103] FS-Cache: N-key=[8] '653c5c0100000000'
	[Mar11 11:51] hrtimer: interrupt took 2085213 ns
	[Mar11 12:42] systemd-journald[222]: Failed to send WATCHDOG=1 notification message: Connection refused
	
	
	==> etcd [fcca5b6d52b255689a40c57790767057efec952fd6e465b7474d6e3b65546cb1] <==
	{"level":"info","ts":"2024-03-11T12:48:11.548578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-11T12:48:11.548679Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-11T12:48:11.55041Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T12:48:11.550508Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-11T12:48:11.550521Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-11T12:48:11.556278Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T12:48:11.556321Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T12:48:12.420695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-11T12:48:12.420808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-11T12:48:12.420912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-11T12:48:12.42097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T12:48:12.421003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-11T12:48:12.42107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-11T12:48:12.421108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-11T12:48:12.424871Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T12:48:12.427979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-109866 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T12:48:12.428053Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T12:48:12.42881Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T12:48:12.428923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T12:48:12.428992Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T12:48:12.429589Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T12:48:12.42971Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T12:48:12.4301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T12:48:12.430159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T12:48:12.441082Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [849cf95bb87766f0ca68d6a8300e6b17b46db9267afde731140ce9a2396230a5] <==
	2024/03/11 12:49:51 GCP Auth Webhook started!
	2024/03/11 12:50:03 Ready to marshal response ...
	2024/03/11 12:50:03 Ready to write response ...
	2024/03/11 12:50:11 Ready to marshal response ...
	2024/03/11 12:50:11 Ready to write response ...
	2024/03/11 12:50:18 Ready to marshal response ...
	2024/03/11 12:50:18 Ready to write response ...
	2024/03/11 12:50:19 Ready to marshal response ...
	2024/03/11 12:50:19 Ready to write response ...
	2024/03/11 12:50:27 Ready to marshal response ...
	2024/03/11 12:50:27 Ready to write response ...
	2024/03/11 12:50:42 Ready to marshal response ...
	2024/03/11 12:50:42 Ready to write response ...
	2024/03/11 12:51:08 Ready to marshal response ...
	2024/03/11 12:51:08 Ready to write response ...
	2024/03/11 12:51:08 Ready to marshal response ...
	2024/03/11 12:51:08 Ready to write response ...
	2024/03/11 12:51:08 Ready to marshal response ...
	2024/03/11 12:51:08 Ready to write response ...
	2024/03/11 12:51:22 Ready to marshal response ...
	2024/03/11 12:51:22 Ready to write response ...
	2024/03/11 12:51:31 Ready to marshal response ...
	2024/03/11 12:51:31 Ready to write response ...
	
	
	==> kernel <==
	 12:51:56 up  4:34,  0 users,  load average: 0.96, 2.10, 2.66
	Linux addons-109866 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [75830bc702c7c40322b8ece18238ee3ac83b25eb4f1886f166f659962c8ea1cb] <==
	I0311 12:49:55.942080       1 main.go:227] handling current node
	I0311 12:50:05.952394       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:05.952424       1 main.go:227] handling current node
	I0311 12:50:15.956953       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:15.957626       1 main.go:227] handling current node
	I0311 12:50:25.970071       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:25.970115       1 main.go:227] handling current node
	I0311 12:50:35.974566       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:35.974595       1 main.go:227] handling current node
	I0311 12:50:45.987317       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:45.987362       1 main.go:227] handling current node
	I0311 12:50:56.003434       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:56.003467       1 main.go:227] handling current node
	I0311 12:51:06.016478       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:51:06.016511       1 main.go:227] handling current node
	I0311 12:51:16.025277       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:51:16.025308       1 main.go:227] handling current node
	I0311 12:51:26.038207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:51:26.038235       1 main.go:227] handling current node
	I0311 12:51:36.042582       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:51:36.042612       1 main.go:227] handling current node
	I0311 12:51:46.049670       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:51:46.049700       1 main.go:227] handling current node
	I0311 12:51:56.060561       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:51:56.060591       1 main.go:227] handling current node
	
	
	==> kube-apiserver [a6165e0945dcecdd3546773eb735a8a1061006f886d2c105db84c01fe241e0ca] <==
	W0311 12:49:03.532791       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 12:49:03.533095       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 12:49:03.533123       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 12:49:03.535110       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0311 12:49:07.546123       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.165.135:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.165.135:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
	W0311 12:49:07.546258       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 12:49:07.546298       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0311 12:49:07.676253       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0311 12:49:07.698948       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 12:49:07.706578       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 12:49:16.726627       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0311 12:50:06.987502       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400968ff50), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400703ee10), ResponseWriter:(*httpsnoop.rw)(0x400703ee10), Flusher:(*httpsnoop.rw)(0x400703ee10), CloseNotifier:(*httpsnoop.rw)(0x400703ee10), Pusher:(*httpsnoop.rw)(0x400703ee10)}}, encoder:(*versioning.codec)(0x40044d5d60), memAllocator:(*runtime.Allocator)(0x4006d098d8)})
	I0311 12:50:16.725725       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 12:50:20.218434       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0311 12:50:43.733593       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0311 12:51:08.861224       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.249.19"}
	I0311 12:51:16.724835       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 12:51:16.765637       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0311 12:51:16.777681       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0311 12:51:17.803256       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0311 12:51:22.333823       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0311 12:51:22.659101       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.193.152"}
	I0311 12:51:31.324986       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.196.172"}
	
	
	==> kube-controller-manager [ce5f4d541da9803f2e592f5cbc86244d8503fcb630bbb1c3eb41696c53a2d65b] <==
	I0311 12:51:24.725152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="8.689µs"
	I0311 12:51:26.878654       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0311 12:51:27.413315       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 12:51:27.413348       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 12:51:31.115922       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0311 12:51:31.141739       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-cdg5s"
	I0311 12:51:31.164365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.075138ms"
	I0311 12:51:31.213247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.82823ms"
	I0311 12:51:31.213329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.59µs"
	I0311 12:51:31.213422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.053µs"
	I0311 12:51:32.036253       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0311 12:51:32.036291       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 12:51:32.502663       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0311 12:51:32.502836       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 12:51:34.306923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.766µs"
	I0311 12:51:35.305978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.428µs"
	W0311 12:51:36.017330       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 12:51:36.017364       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 12:51:36.327258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.65µs"
	I0311 12:51:48.051217       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0311 12:51:48.053847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="5.202µs"
	I0311 12:51:48.061129       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0311 12:51:49.602941       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 12:51:49.602980       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 12:51:50.373566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.465µs"
	
	
	==> kube-proxy [2260a3b94348bcef7e2bfc11cf30d679d9aca3f41c2c21f9f32f71246a44aaf6] <==
	I0311 12:48:33.865597       1 server_others.go:69] "Using iptables proxy"
	I0311 12:48:33.882941       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0311 12:48:33.909681       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0311 12:48:33.911495       1 server_others.go:152] "Using iptables Proxier"
	I0311 12:48:33.911523       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0311 12:48:33.911529       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0311 12:48:33.911552       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 12:48:33.911744       1 server.go:846] "Version info" version="v1.28.4"
	I0311 12:48:33.911754       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 12:48:33.913072       1 config.go:188] "Starting service config controller"
	I0311 12:48:33.913084       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 12:48:33.913101       1 config.go:97] "Starting endpoint slice config controller"
	I0311 12:48:33.913106       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 12:48:33.913446       1 config.go:315] "Starting node config controller"
	I0311 12:48:33.913452       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 12:48:34.013674       1 shared_informer.go:318] Caches are synced for node config
	I0311 12:48:34.018057       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 12:48:34.018108       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [d253b8b91fc0b2588014a884cac6639ef7ef50c2ad7f93a5b5da851bdb34e760] <==
	W0311 12:48:16.956293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 12:48:16.956310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 12:48:16.956388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 12:48:16.956417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 12:48:16.965257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 12:48:16.965299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 12:48:16.965365       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 12:48:16.965394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 12:48:16.965453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 12:48:16.965485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 12:48:17.803370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 12:48:17.803467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 12:48:17.829322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 12:48:17.829661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 12:48:17.928145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 12:48:17.928215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 12:48:17.960317       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 12:48:17.960356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 12:48:17.971198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 12:48:17.971388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 12:48:18.019017       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 12:48:18.019231       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 12:48:18.091221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 12:48:18.091409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0311 12:48:20.137610       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 12:51:35 addons-109866 kubelet[1478]: I0311 12:51:35.314197    1478 scope.go:117] "RemoveContainer" containerID="d6ee104aeb6b95631ee73e236df55ea240ab42af5110ff3bf4bdac283a373522"
	Mar 11 12:51:36 addons-109866 kubelet[1478]: I0311 12:51:36.313175    1478 scope.go:117] "RemoveContainer" containerID="9469f6b7228e758a091b7489e620c8d5dae7481beee1654b93737e16bdb9c62a"
	Mar 11 12:51:36 addons-109866 kubelet[1478]: E0311 12:51:36.313467    1478 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-cdg5s_default(90a2b16f-6ce1-4f6d-b891-ecae64a3aa92)\"" pod="default/hello-world-app-5d77478584-cdg5s" podUID="90a2b16f-6ce1-4f6d-b891-ecae64a3aa92"
	Mar 11 12:51:47 addons-109866 kubelet[1478]: I0311 12:51:47.234234    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57v6h\" (UniqueName: \"kubernetes.io/projected/fd805e6a-7c5e-423b-b249-5bf6eae790f1-kube-api-access-57v6h\") pod \"fd805e6a-7c5e-423b-b249-5bf6eae790f1\" (UID: \"fd805e6a-7c5e-423b-b249-5bf6eae790f1\") "
	Mar 11 12:51:47 addons-109866 kubelet[1478]: I0311 12:51:47.238333    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd805e6a-7c5e-423b-b249-5bf6eae790f1-kube-api-access-57v6h" (OuterVolumeSpecName: "kube-api-access-57v6h") pod "fd805e6a-7c5e-423b-b249-5bf6eae790f1" (UID: "fd805e6a-7c5e-423b-b249-5bf6eae790f1"). InnerVolumeSpecName "kube-api-access-57v6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 12:51:47 addons-109866 kubelet[1478]: I0311 12:51:47.335360    1478 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-57v6h\" (UniqueName: \"kubernetes.io/projected/fd805e6a-7c5e-423b-b249-5bf6eae790f1-kube-api-access-57v6h\") on node \"addons-109866\" DevicePath \"\""
	Mar 11 12:51:47 addons-109866 kubelet[1478]: I0311 12:51:47.349122    1478 scope.go:117] "RemoveContainer" containerID="0008c6bccd19e1e203971a12151f6715d2a8b38ca858da0d5df923193b00abe3"
	Mar 11 12:51:47 addons-109866 kubelet[1478]: I0311 12:51:47.958271    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fd805e6a-7c5e-423b-b249-5bf6eae790f1" path="/var/lib/kubelet/pods/fd805e6a-7c5e-423b-b249-5bf6eae790f1/volumes"
	Mar 11 12:51:49 addons-109866 kubelet[1478]: I0311 12:51:49.957163    1478 scope.go:117] "RemoveContainer" containerID="9469f6b7228e758a091b7489e620c8d5dae7481beee1654b93737e16bdb9c62a"
	Mar 11 12:51:49 addons-109866 kubelet[1478]: I0311 12:51:49.959822    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b0724f57-4ee5-4a51-a8fd-b9385e563285" path="/var/lib/kubelet/pods/b0724f57-4ee5-4a51-a8fd-b9385e563285/volumes"
	Mar 11 12:51:49 addons-109866 kubelet[1478]: I0311 12:51:49.960799    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cc9f17fb-9208-448e-b4fe-f2c2d9db63cd" path="/var/lib/kubelet/pods/cc9f17fb-9208-448e-b4fe-f2c2d9db63cd/volumes"
	Mar 11 12:51:50 addons-109866 kubelet[1478]: I0311 12:51:50.358880    1478 scope.go:117] "RemoveContainer" containerID="9469f6b7228e758a091b7489e620c8d5dae7481beee1654b93737e16bdb9c62a"
	Mar 11 12:51:50 addons-109866 kubelet[1478]: I0311 12:51:50.359278    1478 scope.go:117] "RemoveContainer" containerID="4bf8c0cb9aee9b6e4eb5b46ec436b7c45554e0de7dae6e11f9aa8222df356c5c"
	Mar 11 12:51:50 addons-109866 kubelet[1478]: E0311 12:51:50.359578    1478 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-cdg5s_default(90a2b16f-6ce1-4f6d-b891-ecae64a3aa92)\"" pod="default/hello-world-app-5d77478584-cdg5s" podUID="90a2b16f-6ce1-4f6d-b891-ecae64a3aa92"
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.357505    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d909a96-398f-47ac-a95e-cf1073646919-webhook-cert\") pod \"7d909a96-398f-47ac-a95e-cf1073646919\" (UID: \"7d909a96-398f-47ac-a95e-cf1073646919\") "
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.357565    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlnsw\" (UniqueName: \"kubernetes.io/projected/7d909a96-398f-47ac-a95e-cf1073646919-kube-api-access-dlnsw\") pod \"7d909a96-398f-47ac-a95e-cf1073646919\" (UID: \"7d909a96-398f-47ac-a95e-cf1073646919\") "
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.359517    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d909a96-398f-47ac-a95e-cf1073646919-kube-api-access-dlnsw" (OuterVolumeSpecName: "kube-api-access-dlnsw") pod "7d909a96-398f-47ac-a95e-cf1073646919" (UID: "7d909a96-398f-47ac-a95e-cf1073646919"). InnerVolumeSpecName "kube-api-access-dlnsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.360791    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d909a96-398f-47ac-a95e-cf1073646919-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7d909a96-398f-47ac-a95e-cf1073646919" (UID: "7d909a96-398f-47ac-a95e-cf1073646919"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.363045    1478 scope.go:117] "RemoveContainer" containerID="0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c"
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.372905    1478 scope.go:117] "RemoveContainer" containerID="0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c"
	Mar 11 12:51:51 addons-109866 kubelet[1478]: E0311 12:51:51.373520    1478 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\": not found" containerID="0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c"
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.373573    1478 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c"} err="failed to get container status \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ff4d333d12e0de7a63e21b7b6f56b21a50b8865832ef2266b0f109f903f858c\": not found"
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.458282    1478 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d909a96-398f-47ac-a95e-cf1073646919-webhook-cert\") on node \"addons-109866\" DevicePath \"\""
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.458336    1478 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dlnsw\" (UniqueName: \"kubernetes.io/projected/7d909a96-398f-47ac-a95e-cf1073646919-kube-api-access-dlnsw\") on node \"addons-109866\" DevicePath \"\""
	Mar 11 12:51:51 addons-109866 kubelet[1478]: I0311 12:51:51.958787    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7d909a96-398f-47ac-a95e-cf1073646919" path="/var/lib/kubelet/pods/7d909a96-398f-47ac-a95e-cf1073646919/volumes"
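
	The back-off durations in the kubelet errors above (10s, then 20s, for the crashing hello-world-app container) follow the kubelet's doubling restart back-off. An illustrative Go sketch of that doubling-with-a-cap pattern; the base and cap here are assumed values for illustration, not read from this cluster's kubelet configuration:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    func main() {
	        base, maxDelay := 10*time.Second, 5*time.Minute // assumed values
	        d := base
	        for i := 1; i <= 6; i++ {
	            fmt.Printf("restart %d: back-off %s\n", i, d)
	            d *= 2 // double after each failed restart...
	            if d > maxDelay {
	                d = maxDelay // ...up to a fixed ceiling
	            }
	        }
	    }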
	
	
	==> storage-provisioner [3cbaca72994090d255ccee0e7e99140a0718028d4c82d05eba0fa1f2230e6ef8] <==
	I0311 12:48:39.024053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 12:48:39.111181       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 12:48:39.111291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 12:48:39.220507       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 12:48:39.220688       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-109866_b52c2dbf-066e-42f7-a420-bc0218bbc508!
	I0311 12:48:39.221713       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8f346508-6b10-4a09-bec9-d1b8b9d85914", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-109866_b52c2dbf-066e-42f7-a420-bc0218bbc508 became leader
	I0311 12:48:39.528884       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-109866_b52c2dbf-066e-42f7-a420-bc0218bbc508!
	E0311 12:50:27.693593       1 controller.go:1050] claim "28835a80-bbb1-42b9-a246-925c8b10c615" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-109866 -n addons-109866
helpers_test.go:261: (dbg) Run:  kubectl --context addons-109866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (35.52s)

                                                
                                    
TestAddons/parallel/CSI (69.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 51.59822ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-109866 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109866 get pvc hpvc -o jsonpath={.status.phase} -n default
[the identical poll above was logged 19 times while the test waited on the claim's phase]
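The wait above is minikube's standard polling pattern: shell out to kubectl and re-read .status.phase until it reports the phase the test expects or the 6m0s budget runs out. A minimal Go sketch of that pattern; the helper name and the 2s interval are assumptions (the log carries no per-poll timing), not the actual helpers_test.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase mirrors the polling logged above: run kubectl, read the
// claim's .status.phase, retry until it matches or the deadline passes.
func waitForPVCPhase(kctx, name, ns, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval; not shown in the log
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q within %s", ns, name, want, timeout)
}

func main() {
	// "Bound" is the phase a PVC wait typically targets.
	if err := waitForPVCPhase("addons-109866", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}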
addons_test.go:574: (dbg) Run:  kubectl --context addons-109866 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [491fca5e-20ed-446a-be2f-652a12d5e889] Pending
helpers_test.go:344: "task-pv-pod" [491fca5e-20ed-446a-be2f-652a12d5e889] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [491fca5e-20ed-446a-be2f-652a12d5e889] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.006481384s
addons_test.go:584: (dbg) Run:  kubectl --context addons-109866 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-109866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-109866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-109866 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-109866 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-109866 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
[the identical poll above was logged 21 times while the test waited on the claim's phase]
addons_test.go:616: (dbg) Run:  kubectl --context addons-109866 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e683e2ba-bbbe-42b7-a383-7e8b8b20b3c5] Pending
helpers_test.go:344: "task-pv-pod-restore" [e683e2ba-bbbe-42b7-a383-7e8b8b20b3c5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e683e2ba-bbbe-42b7-a383-7e8b8b20b3c5] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003487283s
addons_test.go:626: (dbg) Run:  kubectl --context addons-109866 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-109866 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-109866 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-109866 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.782054103s)
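The restore sequence above hinges on the PVC's dataSource field: the new claim (hpvc-restore) points at the VolumeSnapshot, so the CSI driver provisions the volume from the snapshot's contents. A sketch of such a manifest, submitted the way the test does; minikube's actual testdata/csi-hostpath-driver/pvc-restore.yaml is not reproduced in this log, so the storage class and size below are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// restorePVC is illustrative only: the storage class and size are assumed,
// since the real testdata file's contents do not appear in this report.
const restorePVC = `apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
`

func main() {
	// Pipe the manifest to kubectl the same way `kubectl create -f -` is used.
	cmd := exec.Command("kubectl", "--context", "addons-109866", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(restorePVC)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("create failed:", err)
	}
}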
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-109866 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (648.592583ms)

-- stdout --

-- /stdout --
** stderr ** 
	I0311 12:50:58.363753  757427 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:50:58.364515  757427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:50:58.364529  757427 out.go:304] Setting ErrFile to fd 2...
	I0311 12:50:58.364535  757427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:50:58.364861  757427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 12:50:58.365166  757427 mustload.go:65] Loading cluster: addons-109866
	I0311 12:50:58.365547  757427 config.go:182] Loaded profile config "addons-109866": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:50:58.365570  757427 addons.go:597] checking whether the cluster is paused
	I0311 12:50:58.365676  757427 config.go:182] Loaded profile config "addons-109866": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:50:58.365695  757427 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:50:58.366175  757427 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:50:58.382398  757427 ssh_runner.go:195] Run: systemctl --version
	I0311 12:50:58.382465  757427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:50:58.399452  757427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:50:58.489128  757427 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0311 12:50:58.489215  757427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 12:50:58.534953  757427 cri.go:89] found id: "d6ee104aeb6b95631ee73e236df55ea240ab42af5110ff3bf4bdac283a373522"
	I0311 12:50:58.534975  757427 cri.go:89] found id: "0c9abe5f0ef0cf82ae6d2f61e4d4aadf5b41a9f3e9550eaccb62fc4a339f2436"
	I0311 12:50:58.534980  757427 cri.go:89] found id: "d325bdb344e77789d422eba6247f1d5361e15ccd742f43bfa02df9579ae6f9cf"
	I0311 12:50:58.534984  757427 cri.go:89] found id: "9725ec16da0a739180719f8e3b1be66d0b9606b97c30f153b890a49b32601a55"
	I0311 12:50:58.534988  757427 cri.go:89] found id: "4deddb1b3e244037464605871f9ffd92cde5acd350edcb7658058f9b4bbfdfc7"
	I0311 12:50:58.534991  757427 cri.go:89] found id: "3cbaca72994090d255ccee0e7e99140a0718028d4c82d05eba0fa1f2230e6ef8"
	I0311 12:50:58.534994  757427 cri.go:89] found id: "75830bc702c7c40322b8ece18238ee3ac83b25eb4f1886f166f659962c8ea1cb"
	I0311 12:50:58.534998  757427 cri.go:89] found id: "2260a3b94348bcef7e2bfc11cf30d679d9aca3f41c2c21f9f32f71246a44aaf6"
	I0311 12:50:58.535001  757427 cri.go:89] found id: "d253b8b91fc0b2588014a884cac6639ef7ef50c2ad7f93a5b5da851bdb34e760"
	I0311 12:50:58.535007  757427 cri.go:89] found id: "ce5f4d541da9803f2e592f5cbc86244d8503fcb630bbb1c3eb41696c53a2d65b"
	I0311 12:50:58.535011  757427 cri.go:89] found id: "a6165e0945dcecdd3546773eb735a8a1061006f886d2c105db84c01fe241e0ca"
	I0311 12:50:58.535014  757427 cri.go:89] found id: "fcca5b6d52b255689a40c57790767057efec952fd6e465b7474d6e3b65546cb1"
	I0311 12:50:58.535018  757427 cri.go:89] found id: ""
	I0311 12:50:58.535076  757427 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0311 12:50:58.616330  757427 out.go:177] 
	W0311 12:50:58.618016  757427 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-11T12:50:58Z" level=error msg="stat /run/containerd/runc/k8s.io/dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451: no such file or directory"
	
	W0311 12:50:58.618049  757427 out.go:239] * 
	W0311 12:50:58.940777  757427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 12:50:58.943128  757427 out.go:177] 

** /stderr **
addons_test.go:644: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-109866 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
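The exit-11 failure above appears to be a teardown race rather than a genuinely paused cluster: the csi-hostpath-driver containers were still being removed while the disable command's paused-check enumerated them, so `runc list` stat'ed a state directory that had just vanished. An illustrative way to tolerate that (a sketch only, not minikube's actual fix) is to treat the "no such file or directory" case as transient and retry:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// listRuncContainers retries the exact command from the log, treating the
// vanished-state-dir error as transient rather than fatal.
func listRuncContainers(root string) ([]byte, error) {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		if strings.Contains(string(out), "no such file or directory") {
			lastErr = fmt.Errorf("transient state-dir race: %s", strings.TrimSpace(string(out)))
			time.Sleep(500 * time.Millisecond) // let the teardown finish, then retry
			continue
		}
		return nil, err // a different failure; surface it immediately
	}
	return nil, lastErr
}

func main() {
	out, err := listRuncContainers("/run/containerd/runc/k8s.io")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(out))
}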
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-109866
helpers_test.go:235: (dbg) docker inspect addons-109866:
-- stdout --
	[
	    {
	        "Id": "53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d",
	        "Created": "2024-03-11T12:47:54.055346443Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 747758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-11T12:47:54.367201973Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d/hosts",
	        "LogPath": "/var/lib/docker/containers/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d/53572c512cfbf12e4092877c5d5db153607aff91742e744e1f978803e552f09d-json.log",
	        "Name": "/addons-109866",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-109866:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-109866",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/142c08e208268212c96ed1c5ca80c49e40d70b844e297dd4d382cf0169a2b2ba-init/diff:/var/lib/docker/overlay2/361ff7146c1f8f9f5c07c69a78aa76c291e59293e7654dd235648b6a877bb54d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/142c08e208268212c96ed1c5ca80c49e40d70b844e297dd4d382cf0169a2b2ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/142c08e208268212c96ed1c5ca80c49e40d70b844e297dd4d382cf0169a2b2ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/142c08e208268212c96ed1c5ca80c49e40d70b844e297dd4d382cf0169a2b2ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-109866",
	                "Source": "/var/lib/docker/volumes/addons-109866/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-109866",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-109866",
	                "name.minikube.sigs.k8s.io": "addons-109866",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a21c945f9a0ebc9c10525fc36801b5b72743725b6baeedb03335921c78a575e6",
	            "SandboxKey": "/var/run/docker/netns/a21c945f9a0e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33743"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33742"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33739"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33741"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33740"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-109866": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "53572c512cfb",
	                        "addons-109866"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "ab1cf46bde62886ba4c93b59c9a4335370cc92462b41b31afd1f8a70abc84310",
	                    "EndpointID": "28960e5aab80580d2b356d6c37c1ff6d4036ccfbd32f9fed0d11de957c5f4157",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-109866",
	                        "53572c512cfb"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
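Note how the PortBindings in HostConfig request HostPort "" (let Docker pick), while NetworkSettings.Ports shows the ephemeral ports actually assigned, e.g. 22/tcp on 127.0.0.1:33743, matching the SSH client seen in the stderr log above. A sketch of reading a mapping back with the same docker inspect format string minikube logs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort reads the ephemeral host port Docker assigned to a container port,
// mirroring the `docker container inspect -f ...` command from the log.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("addons-109866", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh is published on 127.0.0.1:" + port) // e.g. 33743 in this run
}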
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-109866 -n addons-109866
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-109866 logs -n 25: (1.73534346s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-568522                                                                     | download-only-568522   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | -o=json --download-only                                                                     | download-only-228434   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | -p download-only-228434                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-228434                                                                     | download-only-228434   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | -o=json --download-only                                                                     | download-only-628520   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | -p download-only-628520                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-628520                                                                     | download-only-628520   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-568522                                                                     | download-only-568522   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-228434                                                                     | download-only-228434   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-628520                                                                     | download-only-628520   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | --download-only -p                                                                          | download-docker-201665 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | download-docker-201665                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-201665                                                                   | download-docker-201665 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-995452   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | binary-mirror-995452                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34573                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-995452                                                                     | binary-mirror-995452   | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | addons-109866                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | addons-109866                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-109866 --wait=true                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-109866 ip                                                                            | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	| addons  | addons-109866 addons disable                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	|         | -p addons-109866                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-109866 ssh cat                                                                       | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	|         | /opt/local-path-provisioner/pvc-28835a80-bbb1-42b9-a246-925c8b10c615_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-109866 addons disable                                                                | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109866 addons                                                                        | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC | 11 Mar 24 12:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109866 addons                                                                        | addons-109866          | jenkins | v1.32.0 | 11 Mar 24 12:50 UTC |                     |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:47:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:47:30.505625  747291 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:47:30.505772  747291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:30.505783  747291 out.go:304] Setting ErrFile to fd 2...
	I0311 12:47:30.505789  747291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:30.506024  747291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 12:47:30.506486  747291 out.go:298] Setting JSON to false
	I0311 12:47:30.507360  747291 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16195,"bootTime":1710145056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 12:47:30.507434  747291 start.go:139] virtualization:  
	I0311 12:47:30.511718  747291 out.go:177] * [addons-109866] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:47:30.514433  747291 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 12:47:30.516202  747291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:47:30.514447  747291 notify.go:220] Checking for updates...
	I0311 12:47:30.518284  747291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:47:30.520517  747291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 12:47:30.522463  747291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 12:47:30.524331  747291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 12:47:30.526767  747291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:47:30.547159  747291 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:47:30.547281  747291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:30.619042  747291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:47:30.609975253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:30.619154  747291 docker.go:295] overlay module found
	I0311 12:47:30.621840  747291 out.go:177] * Using the docker driver based on user configuration
	I0311 12:47:30.623441  747291 start.go:297] selected driver: docker
	I0311 12:47:30.623458  747291 start.go:901] validating driver "docker" against <nil>
	I0311 12:47:30.623471  747291 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 12:47:30.624094  747291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:30.677742  747291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:47:30.668868648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:30.677920  747291 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:47:30.678161  747291 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 12:47:30.680581  747291 out.go:177] * Using Docker driver with root privileges
	I0311 12:47:30.682818  747291 cni.go:84] Creating CNI manager for ""
	I0311 12:47:30.682841  747291 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:47:30.682853  747291 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:47:30.682935  747291 start.go:340] cluster config:
	{Name:addons-109866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:47:30.685301  747291 out.go:177] * Starting "addons-109866" primary control-plane node in "addons-109866" cluster
	I0311 12:47:30.687504  747291 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 12:47:30.689781  747291 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:47:30.691961  747291 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 12:47:30.692025  747291 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0311 12:47:30.692038  747291 cache.go:56] Caching tarball of preloaded images
	I0311 12:47:30.692051  747291 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:47:30.692122  747291 preload.go:173] Found /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 12:47:30.692132  747291 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0311 12:47:30.692500  747291 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/config.json ...
	I0311 12:47:30.692566  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/config.json: {Name:mk0a8adc75169f20147b340b95375672a0f5ea0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:47:30.707150  747291 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:47:30.707274  747291 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:47:30.707304  747291 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 12:47:30.707323  747291 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 12:47:30.707338  747291 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 12:47:30.707344  747291 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0311 12:47:46.860177  747291 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0311 12:47:46.860220  747291 cache.go:194] Successfully downloaded all kic artifacts
	I0311 12:47:46.860251  747291 start.go:360] acquireMachinesLock for addons-109866: {Name:mkdf0c11320566f0571b3fb5c40daf88466f431d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 12:47:46.861162  747291 start.go:364] duration metric: took 886.928µs to acquireMachinesLock for "addons-109866"
	I0311 12:47:46.861208  747291 start.go:93] Provisioning new machine with config: &{Name:addons-109866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0311 12:47:46.861307  747291 start.go:125] createHost starting for "" (driver="docker")
	I0311 12:47:46.863453  747291 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0311 12:47:46.863711  747291 start.go:159] libmachine.API.Create for "addons-109866" (driver="docker")
	I0311 12:47:46.863754  747291 client.go:168] LocalClient.Create starting
	I0311 12:47:46.863880  747291 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem
	I0311 12:47:47.012226  747291 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem
	I0311 12:47:47.789569  747291 cli_runner.go:164] Run: docker network inspect addons-109866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0311 12:47:47.807956  747291 cli_runner.go:211] docker network inspect addons-109866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0311 12:47:47.808053  747291 network_create.go:281] running [docker network inspect addons-109866] to gather additional debugging logs...
	I0311 12:47:47.808078  747291 cli_runner.go:164] Run: docker network inspect addons-109866
	W0311 12:47:47.823516  747291 cli_runner.go:211] docker network inspect addons-109866 returned with exit code 1
	I0311 12:47:47.823549  747291 network_create.go:284] error running [docker network inspect addons-109866]: docker network inspect addons-109866: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-109866 not found
	I0311 12:47:47.823576  747291 network_create.go:286] output of [docker network inspect addons-109866]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-109866 not found
	
	** /stderr **
	I0311 12:47:47.823688  747291 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 12:47:47.839307  747291 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40026949c0}
	I0311 12:47:47.839348  747291 network_create.go:124] attempt to create docker network addons-109866 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0311 12:47:47.839412  747291 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-109866 addons-109866
	I0311 12:47:47.908199  747291 network_create.go:108] docker network addons-109866 192.168.49.0/24 created
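Note: network.go above probed for a free private /24 before creating the bridge and settled on 192.168.49.0/24. A minimal sketch of that kind of scan, assuming it suffices to skip subnets already bound to host interfaces (the real code also consults existing docker networks and reservations; the step size here is chosen for the sketch):

	// Sketch: pick the first 192.168.x.0/24 not covered by a host interface.
	package main

	import (
		"fmt"
		"net"
	)

	func subnetInUse(cidr string) bool {
		_, want, _ := net.ParseCIDR(cidr)
		ifaces, _ := net.Interfaces()
		for _, ifc := range ifaces {
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && want.Contains(ipnet.IP) {
					return true
				}
			}
		}
		return false
	}

	func main() {
		for third := 49; third < 255; third += 10 { // 192.168.49.0/24, 192.168.59.0/24, ...
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !subnetInUse(cidr) {
				fmt.Println("free private subnet:", cidr)
				return
			}
		}
	}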
	I0311 12:47:47.908233  747291 kic.go:121] calculated static IP "192.168.49.2" for the "addons-109866" container
	I0311 12:47:47.908305  747291 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0311 12:47:47.921970  747291 cli_runner.go:164] Run: docker volume create addons-109866 --label name.minikube.sigs.k8s.io=addons-109866 --label created_by.minikube.sigs.k8s.io=true
	I0311 12:47:47.938316  747291 oci.go:103] Successfully created a docker volume addons-109866
	I0311 12:47:47.938413  747291 cli_runner.go:164] Run: docker run --rm --name addons-109866-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109866 --entrypoint /usr/bin/test -v addons-109866:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0311 12:47:49.793841  747291 cli_runner.go:217] Completed: docker run --rm --name addons-109866-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109866 --entrypoint /usr/bin/test -v addons-109866:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (1.855384667s)
	I0311 12:47:49.793875  747291 oci.go:107] Successfully prepared a docker volume addons-109866
	I0311 12:47:49.793909  747291 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 12:47:49.793931  747291 kic.go:194] Starting extracting preloaded images to volume ...
	I0311 12:47:49.794018  747291 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-109866:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0311 12:47:53.975614  747291 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-109866:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.181547685s)
	I0311 12:47:53.975646  747291 kic.go:203] duration metric: took 4.181711172s to extract preloaded images to volume ...
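Note: the two docker run calls above populate the addons-109866 volume — a throwaway "preload sidecar" first validates the volume mount, then tar streams the preloaded image tarball into it. Driving the same extraction from Go looks roughly like this (image ref and tarball path copied from the log; a sketch, not minikube's kic.go):

	// Sketch: extract a preloaded lz4 tarball into a named volume by running
	// tar inside a disposable container, as the log's docker run does.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		const kicImage = "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08"
		const tarball = "/home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", "addons-109866:/extractDir",
			kicImage, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}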
	W0311 12:47:53.975796  747291 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0311 12:47:53.975920  747291 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0311 12:47:54.040715  747291 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-109866 --name addons-109866 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109866 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-109866 --network addons-109866 --ip 192.168.49.2 --volume addons-109866:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0311 12:47:54.376115  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Running}}
	I0311 12:47:54.402847  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:47:54.426091  747291 cli_runner.go:164] Run: docker exec addons-109866 stat /var/lib/dpkg/alternatives/iptables
	I0311 12:47:54.494091  747291 oci.go:144] the created container "addons-109866" has a running status.
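Note: docker run returning does not by itself guarantee the node container stays up, which is why the inspect calls above re-check its state before declaring "running status". The same check as a small poll loop:

	// Sketch: poll `docker container inspect` until .State.Running is true.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 20; i++ {
			out, err := exec.Command("docker", "container", "inspect",
				"addons-109866", "--format", "{{.State.Running}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				fmt.Println("container is running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for the container")
	}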
	I0311 12:47:54.494118  747291 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa...
	I0311 12:47:55.140189  747291 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0311 12:47:55.168592  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:47:55.205259  747291 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0311 12:47:55.205279  747291 kic_runner.go:114] Args: [docker exec --privileged addons-109866 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0311 12:47:55.273314  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:47:55.295900  747291 machine.go:94] provisionDockerMachine start ...
	I0311 12:47:55.296053  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:55.318748  747291 main.go:141] libmachine: Using SSH client type: native
	I0311 12:47:55.319025  747291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0311 12:47:55.319041  747291 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 12:47:55.456536  747291 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-109866
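Note: "Using SSH client type: native" means minikube dials the container's published 22/tcp port itself (127.0.0.1:33743 in this run) with the id_rsa key created above. A hedged sketch using golang.org/x/crypto/ssh (an external module; key path and port copied from the log, not minikube's exact code):

	// Sketch: run `hostname` over SSH against the forwarded port, as the
	// provisioning step above does.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33743", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}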
	
	I0311 12:47:55.456564  747291 ubuntu.go:169] provisioning hostname "addons-109866"
	I0311 12:47:55.456628  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:55.477723  747291 main.go:141] libmachine: Using SSH client type: native
	I0311 12:47:55.477976  747291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0311 12:47:55.477992  747291 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-109866 && echo "addons-109866" | sudo tee /etc/hostname
	I0311 12:47:55.621233  747291 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-109866
	
	I0311 12:47:55.621415  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:55.637651  747291 main.go:141] libmachine: Using SSH client type: native
	I0311 12:47:55.637906  747291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0311 12:47:55.637929  747291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-109866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-109866/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-109866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 12:47:55.764686  747291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 12:47:55.764720  747291 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18350-741028/.minikube CaCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18350-741028/.minikube}
	I0311 12:47:55.764772  747291 ubuntu.go:177] setting up certificates
	I0311 12:47:55.764782  747291 provision.go:84] configureAuth start
	I0311 12:47:55.764847  747291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109866
	I0311 12:47:55.781077  747291 provision.go:143] copyHostCerts
	I0311 12:47:55.781157  747291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/ca.pem (1078 bytes)
	I0311 12:47:55.781292  747291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/cert.pem (1123 bytes)
	I0311 12:47:55.781403  747291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/key.pem (1675 bytes)
	I0311 12:47:55.781465  747291 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem org=jenkins.addons-109866 san=[127.0.0.1 192.168.49.2 addons-109866 localhost minikube]
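Note: provision.go then mints a server certificate whose SANs are listed in the log line above (127.0.0.1, 192.168.49.2, addons-109866, localhost, minikube). A self-contained sketch with Go's crypto/x509 — self-signed here for brevity, whereas minikube signs server.pem with the ca.pem/ca-key.pem generated earlier:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-109866"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // cf. CertExpiration:26280h0m0s (~3 years)
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-109866", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		// Self-signed for the sketch; the real cert is signed by the minikube CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}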
	I0311 12:47:57.414534  747291 provision.go:177] copyRemoteCerts
	I0311 12:47:57.414616  747291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 12:47:57.414660  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.434357  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:57.529828  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 12:47:57.554345  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 12:47:57.577870  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0311 12:47:57.601233  747291 provision.go:87] duration metric: took 1.836423002s to configureAuth
	I0311 12:47:57.601264  747291 ubuntu.go:193] setting minikube options for container-runtime
	I0311 12:47:57.601458  747291 config.go:182] Loaded profile config "addons-109866": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:47:57.601472  747291 machine.go:97] duration metric: took 2.305513654s to provisionDockerMachine
	I0311 12:47:57.601479  747291 client.go:171] duration metric: took 10.737714993s to LocalClient.Create
	I0311 12:47:57.601504  747291 start.go:167] duration metric: took 10.737789733s to libmachine.API.Create "addons-109866"
	I0311 12:47:57.601517  747291 start.go:293] postStartSetup for "addons-109866" (driver="docker")
	I0311 12:47:57.601527  747291 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 12:47:57.601583  747291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 12:47:57.601626  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.617023  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:57.713972  747291 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 12:47:57.716973  747291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0311 12:47:57.717013  747291 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0311 12:47:57.717025  747291 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0311 12:47:57.717032  747291 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0311 12:47:57.717042  747291 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-741028/.minikube/addons for local assets ...
	I0311 12:47:57.717103  747291 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-741028/.minikube/files for local assets ...
	I0311 12:47:57.717131  747291 start.go:296] duration metric: took 115.608999ms for postStartSetup
	I0311 12:47:57.717437  747291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109866
	I0311 12:47:57.732379  747291 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/config.json ...
	I0311 12:47:57.732667  747291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 12:47:57.732721  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.748243  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:57.837649  747291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0311 12:47:57.842110  747291 start.go:128] duration metric: took 10.980788007s to createHost
	I0311 12:47:57.842135  747291 start.go:83] releasing machines lock for "addons-109866", held for 10.980951707s
	I0311 12:47:57.842206  747291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109866
	I0311 12:47:57.860240  747291 ssh_runner.go:195] Run: cat /version.json
	I0311 12:47:57.860259  747291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 12:47:57.860293  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.860329  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:47:57.876740  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:57.880925  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:47:58.079192  747291 ssh_runner.go:195] Run: systemctl --version
	I0311 12:47:58.083740  747291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 12:47:58.088212  747291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0311 12:47:58.113913  747291 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0311 12:47:58.113991  747291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 12:47:58.144010  747291 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0311 12:47:58.144035  747291 start.go:494] detecting cgroup driver to use...
	I0311 12:47:58.144067  747291 detect.go:196] detected "cgroupfs" cgroup driver on host os
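Note: the "cgroupfs" detection above later drives the SystemdCgroup = false rewrite of the containerd config. One way to make that determination — asking the engine itself, which only approximates minikube's detect.go logic — is:

	// Sketch: query the engine's cgroup driver via `docker info`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs" or "systemd"
	}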
	I0311 12:47:58.144121  747291 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 12:47:58.159780  747291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 12:47:58.171356  747291 docker.go:217] disabling cri-docker service (if available) ...
	I0311 12:47:58.171432  747291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 12:47:58.185491  747291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 12:47:58.200151  747291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 12:47:58.294349  747291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 12:47:58.384987  747291 docker.go:233] disabling docker service ...
	I0311 12:47:58.385058  747291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 12:47:58.405114  747291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 12:47:58.417827  747291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 12:47:58.499644  747291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 12:47:58.588300  747291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 12:47:58.600088  747291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 12:47:58.617729  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0311 12:47:58.628239  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 12:47:58.638765  747291 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 12:47:58.638842  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 12:47:58.649346  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 12:47:58.659167  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 12:47:58.669173  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 12:47:58.678887  747291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 12:47:58.688233  747291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 12:47:58.698192  747291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 12:47:58.706717  747291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 12:47:58.714903  747291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:47:58.797055  747291 ssh_runner.go:195] Run: sudo systemctl restart containerd
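Note: the sed runs above rewrite /etc/containerd/config.toml in place — pin the sandbox image to registry.k8s.io/pause:3.9, force SystemdCgroup = false to match the cgroupfs driver, normalize the runc runtime to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d — before restarting containerd. The cgroup edit alone, sketched as a Go string rewrite instead of sed:

	// Sketch: only illustrates the SystemdCgroup = false change from the log.
	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		b, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		patched := strings.ReplaceAll(string(b), "SystemdCgroup = true", "SystemdCgroup = false")
		if err := os.WriteFile(path, []byte(patched), 0o644); err != nil {
			panic(err)
		}
		// Pick up the new config, as the log does with `systemctl restart containerd`.
		if err := exec.Command("systemctl", "restart", "containerd").Run(); err != nil {
			panic(err)
		}
	}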
	I0311 12:47:58.923652  747291 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0311 12:47:58.923806  747291 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0311 12:47:58.927468  747291 start.go:562] Will wait 60s for crictl version
	I0311 12:47:58.927571  747291 ssh_runner.go:195] Run: which crictl
	I0311 12:47:58.930877  747291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 12:47:58.968628  747291 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0311 12:47:58.968783  747291 ssh_runner.go:195] Run: containerd --version
	I0311 12:47:58.991023  747291 ssh_runner.go:195] Run: containerd --version
	I0311 12:47:59.016308  747291 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0311 12:47:59.018512  747291 cli_runner.go:164] Run: docker network inspect addons-109866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 12:47:59.033644  747291 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0311 12:47:59.037292  747291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 12:47:59.048551  747291 kubeadm.go:877] updating cluster {Name:addons-109866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 12:47:59.048686  747291 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 12:47:59.048781  747291 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 12:47:59.089461  747291 containerd.go:612] all images are preloaded for containerd runtime.
	I0311 12:47:59.089486  747291 containerd.go:519] Images already preloaded, skipping extraction
	I0311 12:47:59.089563  747291 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 12:47:59.133163  747291 containerd.go:612] all images are preloaded for containerd runtime.
	I0311 12:47:59.133187  747291 cache_images.go:84] Images are preloaded, skipping loading
	I0311 12:47:59.133196  747291 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0311 12:47:59.133305  747291 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-109866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 12:47:59.133381  747291 ssh_runner.go:195] Run: sudo crictl info
	I0311 12:47:59.171920  747291 cni.go:84] Creating CNI manager for ""
	I0311 12:47:59.171942  747291 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:47:59.171952  747291 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 12:47:59.171996  747291 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-109866 NodeName:addons-109866 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 12:47:59.172154  747291 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-109866"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
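Note: the kubeadm.yaml rendered above stacks four documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sketch for sanity-checking which kinds made it into the generated file (plain string scanning; a real check would unmarshal with a YAML parser):

	// Sketch: list the `kind:` of each YAML document in the generated config.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		b, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		for i, doc := range strings.Split(string(b), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind:") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
				}
			}
		}
	}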
	
	I0311 12:47:59.172243  747291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 12:47:59.181191  747291 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 12:47:59.181268  747291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 12:47:59.190102  747291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0311 12:47:59.209291  747291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 12:47:59.228198  747291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0311 12:47:59.245976  747291 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0311 12:47:59.249326  747291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 12:47:59.260383  747291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:47:59.339134  747291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 12:47:59.355705  747291 certs.go:68] Setting up /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866 for IP: 192.168.49.2
	I0311 12:47:59.355739  747291 certs.go:194] generating shared ca certs ...
	I0311 12:47:59.355765  747291 certs.go:226] acquiring lock for ca certs: {Name:mk7162cd9946a461c84d2f2cea8ea4b87fd89373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:47:59.356526  747291 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key
	I0311 12:48:00.155957  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt ...
	I0311 12:48:00.156047  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt: {Name:mk744f20428760534dc1f0336237227fcabf7e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:00.157160  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key ...
	I0311 12:48:00.157197  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key: {Name:mkf26fd7f704dd60e3b2ddf58fe11aa885997f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:00.157310  747291 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key
	I0311 12:48:00.630864  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.crt ...
	I0311 12:48:00.630899  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.crt: {Name:mkbd4d73ba5b09247f7c9e4c991c1710cde5a749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:00.631656  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key ...
	I0311 12:48:00.631676  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key: {Name:mk75f2fa80c73336760282c57396731158542a7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:00.631778  747291 certs.go:256] generating profile certs ...
	I0311 12:48:00.631842  747291 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.key
	I0311 12:48:00.631861  747291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt with IP's: []
	I0311 12:48:01.305584  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt ...
	I0311 12:48:01.305616  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: {Name:mk722faa99f631f5601c07375460df3ca3f77ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:01.306267  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.key ...
	I0311 12:48:01.306285  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.key: {Name:mkc56fa78cee61ce4570887853684dfd4d7779d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:01.306897  747291 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key.aa9b1a7a
	I0311 12:48:01.306920  747291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt.aa9b1a7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0311 12:48:01.940123  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt.aa9b1a7a ...
	I0311 12:48:01.940154  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt.aa9b1a7a: {Name:mk74b6dc1ab00e4a43e06868d843c31717321777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:01.940903  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key.aa9b1a7a ...
	I0311 12:48:01.940923  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key.aa9b1a7a: {Name:mkcf67507182a282c6efc2bf09d1da75223cfdd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:01.941026  747291 certs.go:381] copying /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt.aa9b1a7a -> /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt
	I0311 12:48:01.941113  747291 certs.go:385] copying /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key.aa9b1a7a -> /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key
	I0311 12:48:01.941169  747291 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.key
	I0311 12:48:01.941192  747291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.crt with IP's: []
	I0311 12:48:02.383863  747291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.crt ...
	I0311 12:48:02.383896  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.crt: {Name:mkb1568d40c68b54a58602a4529a275b6bc990dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:02.384719  747291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.key ...
	I0311 12:48:02.384739  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.key: {Name:mka38203c5ed6bd642af86623754e20813388646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:02.385834  747291 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 12:48:02.385880  747291 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem (1078 bytes)
	I0311 12:48:02.385914  747291 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem (1123 bytes)
	I0311 12:48:02.385941  747291 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem (1675 bytes)
	I0311 12:48:02.386592  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 12:48:02.411584  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0311 12:48:02.435836  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 12:48:02.460093  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0311 12:48:02.484169  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0311 12:48:02.509101  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 12:48:02.533590  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 12:48:02.558587  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 12:48:02.582364  747291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 12:48:02.606448  747291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 12:48:02.624283  747291 ssh_runner.go:195] Run: openssl version
	I0311 12:48:02.629893  747291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 12:48:02.639445  747291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:48:02.642951  747291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 12:48 /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:48:02.643040  747291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:48:02.650196  747291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
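Note: the two commands above wire the minikube CA into the system trust store — copy it under /usr/share/ca-certificates, then symlink /etc/ssl/certs/b5213941.0 to it, where b5213941 is OpenSSL's subject hash of the certificate. Recomputing the hash and recreating the link, sketched in Go (shelling out to openssl):

	// Sketch: compute the OpenSSL subject hash and create the <hash>.0 symlink,
	// mirroring the openssl/ln commands in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // ln -fs semantics: replace any existing link
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
		fmt.Println("trusted via", link)
	}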
	I0311 12:48:02.660112  747291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 12:48:02.664155  747291 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 12:48:02.664211  747291 kubeadm.go:391] StartCluster: {Name:addons-109866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-109866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:48:02.664303  747291 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0311 12:48:02.664360  747291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 12:48:02.722093  747291 cri.go:89] found id: ""
	I0311 12:48:02.722163  747291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 12:48:02.732440  747291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 12:48:02.741880  747291 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0311 12:48:02.741979  747291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 12:48:02.752537  747291 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 12:48:02.752559  747291 kubeadm.go:156] found existing configuration files:
	
	I0311 12:48:02.752626  747291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 12:48:02.761153  747291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 12:48:02.761218  747291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 12:48:02.769438  747291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 12:48:02.778331  747291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 12:48:02.778402  747291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 12:48:02.786823  747291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 12:48:02.795413  747291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 12:48:02.795526  747291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 12:48:02.803890  747291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 12:48:02.812967  747291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 12:48:02.813056  747291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 12:48:02.821426  747291 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0311 12:48:02.864396  747291 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 12:48:02.864452  747291 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 12:48:02.902708  747291 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0311 12:48:02.902780  747291 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0311 12:48:02.902815  747291 kubeadm.go:309] OS: Linux
	I0311 12:48:02.902859  747291 kubeadm.go:309] CGROUPS_CPU: enabled
	I0311 12:48:02.902906  747291 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0311 12:48:02.902951  747291 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0311 12:48:02.902997  747291 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0311 12:48:02.903043  747291 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0311 12:48:02.903089  747291 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0311 12:48:02.903143  747291 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0311 12:48:02.903189  747291 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0311 12:48:02.903234  747291 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0311 12:48:02.975655  747291 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 12:48:02.975768  747291 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 12:48:02.975859  747291 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 12:48:03.214949  747291 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 12:48:03.218249  747291 out.go:204]   - Generating certificates and keys ...
	I0311 12:48:03.218412  747291 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 12:48:03.218512  747291 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 12:48:03.401767  747291 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 12:48:03.803591  747291 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 12:48:04.578095  747291 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 12:48:05.044240  747291 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 12:48:05.408934  747291 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 12:48:05.409112  747291 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-109866 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0311 12:48:05.793941  747291 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 12:48:05.794323  747291 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-109866 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0311 12:48:06.413978  747291 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 12:48:07.440071  747291 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 12:48:07.859558  747291 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 12:48:07.859920  747291 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 12:48:08.202034  747291 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 12:48:08.458612  747291 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 12:48:08.627783  747291 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 12:48:08.816841  747291 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 12:48:08.817418  747291 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 12:48:08.820031  747291 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 12:48:08.822668  747291 out.go:204]   - Booting up control plane ...
	I0311 12:48:08.822769  747291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 12:48:08.822847  747291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 12:48:08.824396  747291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 12:48:08.836220  747291 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 12:48:08.837095  747291 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 12:48:08.837354  747291 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 12:48:08.930549  747291 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 12:48:18.439209  747291 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.508723 seconds
	I0311 12:48:18.439339  747291 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 12:48:18.455990  747291 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 12:48:18.983466  747291 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 12:48:18.983696  747291 kubeadm.go:309] [mark-control-plane] Marking the node addons-109866 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 12:48:19.511424  747291 kubeadm.go:309] [bootstrap-token] Using token: bkf7y5.hnryfcxu9keivkxu
	I0311 12:48:19.513519  747291 out.go:204]   - Configuring RBAC rules ...
	I0311 12:48:19.513654  747291 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 12:48:19.533924  747291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 12:48:19.543892  747291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 12:48:19.548320  747291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 12:48:19.552141  747291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 12:48:19.557717  747291 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 12:48:19.570670  747291 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 12:48:19.813965  747291 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 12:48:19.939683  747291 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 12:48:19.941532  747291 kubeadm.go:309] 
	I0311 12:48:19.941618  747291 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 12:48:19.941627  747291 kubeadm.go:309] 
	I0311 12:48:19.941719  747291 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 12:48:19.941733  747291 kubeadm.go:309] 
	I0311 12:48:19.941760  747291 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 12:48:19.941821  747291 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 12:48:19.941874  747291 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 12:48:19.941883  747291 kubeadm.go:309] 
	I0311 12:48:19.941935  747291 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 12:48:19.941943  747291 kubeadm.go:309] 
	I0311 12:48:19.941989  747291 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 12:48:19.941998  747291 kubeadm.go:309] 
	I0311 12:48:19.942048  747291 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 12:48:19.942125  747291 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 12:48:19.942194  747291 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 12:48:19.942203  747291 kubeadm.go:309] 
	I0311 12:48:19.942300  747291 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 12:48:19.942379  747291 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 12:48:19.942387  747291 kubeadm.go:309] 
	I0311 12:48:19.942467  747291 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bkf7y5.hnryfcxu9keivkxu \
	I0311 12:48:19.942570  747291 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8388c3333519d9f29bb1cc52e18797f4b748e4ad292cdfee8cd4632271dbee8 \
	I0311 12:48:19.942593  747291 kubeadm.go:309] 	--control-plane 
	I0311 12:48:19.942600  747291 kubeadm.go:309] 
	I0311 12:48:19.942681  747291 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 12:48:19.942691  747291 kubeadm.go:309] 
	I0311 12:48:19.942770  747291 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bkf7y5.hnryfcxu9keivkxu \
	I0311 12:48:19.942872  747291 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c8388c3333519d9f29bb1cc52e18797f4b748e4ad292cdfee8cd4632271dbee8 
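
The --discovery-token-ca-cert-hash printed with the join commands pins the cluster CA for joining nodes: it is the SHA-256 of the CA certificate's DER-encoded public key. It can be recomputed on the node and compared against the sha256:c8388c33... value above; this is the standard recipe from the kubeadm documentation, with the CA path adjusted to minikube's layout (an assumption):

    # Recompute the discovery hash from the cluster CA (run inside the node).
    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
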
	I0311 12:48:19.946506  747291 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0311 12:48:19.946626  747291 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 12:48:19.946650  747291 cni.go:84] Creating CNI manager for ""
	I0311 12:48:19.946658  747291 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:48:19.950384  747291 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0311 12:48:19.952261  747291 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0311 12:48:19.956904  747291 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0311 12:48:19.956928  747291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0311 12:48:19.995378  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0311 12:48:21.017387  747291 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.021970206s)
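
With the docker driver and the containerd runtime, minikube selects kindnet as the CNI (cni.go:143 above) and applies its manifest through the bundled kubectl. One way to verify the plumbing afterwards; the DaemonSet name "kindnet" is assumed from the default manifest:

    # CNI config and binaries on the node, plus the kindnet DaemonSet.
    minikube -p addons-109866 ssh -- "ls /etc/cni/net.d /opt/cni/bin"
    kubectl --context addons-109866 -n kube-system get daemonset kindnet
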
	I0311 12:48:21.017427  747291 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 12:48:21.017555  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:21.017634  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-109866 minikube.k8s.io/updated_at=2024_03_11T12_48_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563 minikube.k8s.io/name=addons-109866 minikube.k8s.io/primary=true
	I0311 12:48:21.215139  747291 ops.go:34] apiserver oom_adj: -16
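
The oom_adj probe issued at 12:48:21.017427 confirms the apiserver runs with a negative OOM adjustment (-16), so the kernel's OOM killer strongly prefers other victims. On current kernels the legacy oom_adj value is mirrored into oom_score_adj, and both can be read side by side inside the node:

    # Inspect the apiserver's legacy and modern OOM adjustment values.
    PID=$(pgrep kube-apiserver)
    cat /proc/$PID/oom_adj /proc/$PID/oom_score_adj
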
	I0311 12:48:21.215237  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:21.716361  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:22.215948  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:22.716302  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:23.216335  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:23.715451  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:24.215904  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:24.716085  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:25.215989  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:25.715373  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:26.215853  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:26.715517  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:27.215371  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:27.715366  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:28.215657  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:28.715943  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:29.216239  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:29.715295  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:30.215350  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:30.716155  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:31.215386  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:31.715445  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:32.216159  747291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:48:32.324363  747291 kubeadm.go:1106] duration metric: took 11.306857958s to wait for elevateKubeSystemPrivileges
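
The burst of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges readiness gate: minikube polls about twice a second until the "default" ServiceAccount exists, which proves the controller-manager's ServiceAccount machinery is up, so the minikube-rbac binding issued at 12:48:21.017555 can take effect. A minimal shell sketch of the same gate:

    # Poll until the "default" ServiceAccount exists, then grant
    # cluster-admin to kube-system:default, as minikube-rbac does above.
    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
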
	W0311 12:48:32.324409  747291 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 12:48:32.324416  747291 kubeadm.go:393] duration metric: took 29.660209346s to StartCluster
	I0311 12:48:32.324433  747291 settings.go:142] acquiring lock: {Name:mk647fd5a11531f437bba0a4615b0b34bf87ec0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:32.324569  747291 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:48:32.325022  747291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/kubeconfig: {Name:mkea9792df2a23b99e9686253371e8a16054b02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:48:32.325859  747291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 12:48:32.325892  747291 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0311 12:48:32.328431  747291 out.go:177] * Verifying Kubernetes components...
	I0311 12:48:32.326164  747291 config.go:182] Loaded profile config "addons-109866": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:48:32.326175  747291 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
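
Each addon flagged true in the toEnable map fans out below into a "Setting addon" registration plus a docker container inspect to confirm the machine is still running. The same toggle state is visible from the CLI:

    # The per-profile addon toggles, as reported by minikube itself.
    out/minikube-linux-arm64 -p addons-109866 addons list
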
	I0311 12:48:32.330538  747291 addons.go:69] Setting yakd=true in profile "addons-109866"
	I0311 12:48:32.330576  747291 addons.go:234] Setting addon yakd=true in "addons-109866"
	I0311 12:48:32.330618  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.331145  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.331296  747291 addons.go:69] Setting ingress-dns=true in profile "addons-109866"
	I0311 12:48:32.331324  747291 addons.go:234] Setting addon ingress-dns=true in "addons-109866"
	I0311 12:48:32.331361  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.331774  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.332063  747291 addons.go:69] Setting inspektor-gadget=true in profile "addons-109866"
	I0311 12:48:32.332096  747291 addons.go:234] Setting addon inspektor-gadget=true in "addons-109866"
	I0311 12:48:32.332130  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.332528  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.332706  747291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:48:32.332984  747291 addons.go:69] Setting cloud-spanner=true in profile "addons-109866"
	I0311 12:48:32.333018  747291 addons.go:234] Setting addon cloud-spanner=true in "addons-109866"
	I0311 12:48:32.333040  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.333432  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.335301  747291 addons.go:69] Setting metrics-server=true in profile "addons-109866"
	I0311 12:48:32.335342  747291 addons.go:234] Setting addon metrics-server=true in "addons-109866"
	I0311 12:48:32.335381  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.335796  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.336277  747291 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-109866"
	I0311 12:48:32.336341  747291 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-109866"
	I0311 12:48:32.336368  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.337250  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.342912  747291 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-109866"
	I0311 12:48:32.342955  747291 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-109866"
	I0311 12:48:32.342997  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.343501  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.346872  747291 addons.go:69] Setting default-storageclass=true in profile "addons-109866"
	I0311 12:48:32.346925  747291 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-109866"
	I0311 12:48:32.347280  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.353979  747291 addons.go:69] Setting registry=true in profile "addons-109866"
	I0311 12:48:32.354023  747291 addons.go:234] Setting addon registry=true in "addons-109866"
	I0311 12:48:32.354061  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.354514  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.367336  747291 addons.go:69] Setting gcp-auth=true in profile "addons-109866"
	I0311 12:48:32.367390  747291 mustload.go:65] Loading cluster: addons-109866
	I0311 12:48:32.367582  747291 config.go:182] Loaded profile config "addons-109866": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:48:32.367836  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.373150  747291 addons.go:69] Setting storage-provisioner=true in profile "addons-109866"
	I0311 12:48:32.373208  747291 addons.go:234] Setting addon storage-provisioner=true in "addons-109866"
	I0311 12:48:32.373248  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.373778  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.395899  747291 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-109866"
	I0311 12:48:32.395945  747291 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-109866"
	I0311 12:48:32.396361  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.400578  747291 addons.go:69] Setting ingress=true in profile "addons-109866"
	I0311 12:48:32.400639  747291 addons.go:234] Setting addon ingress=true in "addons-109866"
	I0311 12:48:32.400687  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.401221  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.410758  747291 addons.go:69] Setting volumesnapshots=true in profile "addons-109866"
	I0311 12:48:32.410811  747291 addons.go:234] Setting addon volumesnapshots=true in "addons-109866"
	I0311 12:48:32.410850  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.411318  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.574560  747291 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0311 12:48:32.576318  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.591791  747291 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 12:48:32.592031  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0311 12:48:32.599515  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.612816  747291 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 12:48:32.623879  747291 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 12:48:32.624189  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 12:48:32.624314  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.629345  747291 addons.go:234] Setting addon default-storageclass=true in "addons-109866"
	I0311 12:48:32.629404  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.629921  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.669931  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0311 12:48:32.671941  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0311 12:48:32.673915  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0311 12:48:32.676147  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0311 12:48:32.682154  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0311 12:48:32.603674  747291 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0311 12:48:32.603682  747291 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0311 12:48:32.603686  747291 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0311 12:48:32.603695  747291 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0311 12:48:32.603705  747291 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0311 12:48:32.664349  747291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0311 12:48:32.708960  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0311 12:48:32.710962  747291 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 12:48:32.710983  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 12:48:32.711071  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.712923  747291 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0311 12:48:32.712944  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0311 12:48:32.713014  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.715560  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0311 12:48:32.713862  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0311 12:48:32.713897  747291 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0311 12:48:32.713969  747291 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 12:48:32.713985  747291 out.go:177]   - Using image docker.io/registry:2.8.3
	I0311 12:48:32.723170  747291 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0311 12:48:32.720889  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0311 12:48:32.720972  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0311 12:48:32.720979  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0311 12:48:32.723695  747291 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-109866"
	I0311 12:48:32.727476  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.727660  747291 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0311 12:48:32.729340  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0311 12:48:32.729427  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.732302  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.746652  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:32.747256  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:32.751280  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0311 12:48:32.759438  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0311 12:48:32.759532  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0311 12:48:32.759668  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.775980  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.821719  747291 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0311 12:48:32.821681  747291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 12:48:32.849077  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0311 12:48:32.849512  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0311 12:48:32.850448  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.852909  747291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:48:32.870632  747291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:48:32.890072  747291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0311 12:48:32.898984  747291 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 12:48:32.899058  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0311 12:48:32.899142  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:32.903916  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:32.904027  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:32.962476  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:32.967074  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.014625  747291 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 12:48:33.014649  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 12:48:33.014715  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:33.019190  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.025456  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.073489  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.089506  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.094389  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.095139  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.101486  747291 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0311 12:48:33.096207  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.111245  747291 out.go:177]   - Using image docker.io/busybox:stable
	I0311 12:48:33.110755  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:33.113733  747291 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 12:48:33.113751  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0311 12:48:33.113817  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:33.144322  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	W0311 12:48:33.156257  747291 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0311 12:48:33.156288  747291 retry.go:31] will retry after 126.16413ms: ssh: handshake failed: EOF
	W0311 12:48:33.284234  747291 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0311 12:48:33.284259  747291 retry.go:31] will retry after 259.19574ms: ssh: handshake failed: EOF
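
The "scp memory -->" lines do not shell out to scp(1); minikube streams the manifest bytes over the pooled SSH sessions shown in the sshutil lines, and the two handshake EOFs above are transient dial failures that sshutil retries after a short backoff. With the endpoint and key the log prints, the same transfer can be reproduced by hand (the file name here is illustrative):

    # Stream a manifest into the node over the logged SSH endpoint.
    ssh -i /home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa \
        -p 33743 docker@127.0.0.1 \
        "sudo tee /etc/kubernetes/addons/example.yaml >/dev/null" < example.yaml
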
	I0311 12:48:33.608906  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0311 12:48:33.755652  747291 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0311 12:48:33.755726  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0311 12:48:33.775683  747291 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0311 12:48:33.775715  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0311 12:48:33.780725  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 12:48:33.790831  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 12:48:33.793780  747291 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0311 12:48:33.793809  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0311 12:48:33.798299  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 12:48:33.930534  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 12:48:33.934452  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 12:48:33.961500  747291 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 12:48:33.961525  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0311 12:48:33.971594  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0311 12:48:33.971620  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0311 12:48:33.975941  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 12:48:34.053897  747291 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0311 12:48:34.053923  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0311 12:48:34.056803  747291 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0311 12:48:34.056830  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0311 12:48:34.142714  747291 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0311 12:48:34.142783  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0311 12:48:34.205077  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0311 12:48:34.205150  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0311 12:48:34.309497  747291 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 12:48:34.309574  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 12:48:34.320723  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0311 12:48:34.357291  747291 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0311 12:48:34.357372  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0311 12:48:34.448566  747291 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 12:48:34.448641  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 12:48:34.476888  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0311 12:48:34.476972  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0311 12:48:34.489502  747291 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0311 12:48:34.489577  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0311 12:48:34.508390  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0311 12:48:34.508479  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0311 12:48:34.524709  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 12:48:34.606091  747291 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0311 12:48:34.606162  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0311 12:48:34.607862  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0311 12:48:34.607923  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0311 12:48:34.642335  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0311 12:48:34.642408  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0311 12:48:34.761186  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0311 12:48:34.761258  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0311 12:48:34.785354  747291 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0311 12:48:34.785425  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0311 12:48:34.868285  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0311 12:48:34.868358  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0311 12:48:35.077973  747291 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 12:48:35.078057  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0311 12:48:35.122914  747291 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0311 12:48:35.122988  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0311 12:48:35.173991  747291 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0311 12:48:35.174069  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0311 12:48:35.241515  747291 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0311 12:48:35.241586  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0311 12:48:35.463337  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 12:48:35.486680  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0311 12:48:35.486751  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0311 12:48:35.576256  747291 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.735244891s)
	I0311 12:48:35.576732  747291 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.892304998s)
	I0311 12:48:35.576808  747291 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
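
The pipeline that just completed (2.89s, started at 12:48:32.664349) rewrote CoreDNS's Corefile in place: sed inserts a hosts stanza mapping host.minikube.internal to 192.168.49.1 ahead of the forward plugin and enables the log plugin, then kubectl replace pushes the result. The rewritten Corefile can be read back from the ConfigMap:

    # Show the rewritten Corefile; expect a hosts block with
    # "192.168.49.1 host.minikube.internal" before the forward plugin.
    kubectl --context addons-109866 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}'
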
	I0311 12:48:35.578063  747291 node_ready.go:35] waiting up to 6m0s for node "addons-109866" to be "Ready" ...
	I0311 12:48:35.582997  747291 node_ready.go:49] node "addons-109866" has status "Ready":"True"
	I0311 12:48:35.583018  747291 node_ready.go:38] duration metric: took 4.695372ms for node "addons-109866" to be "Ready" ...
	I0311 12:48:35.583028  747291 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 12:48:35.585729  747291 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 12:48:35.585794  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0311 12:48:35.605373  747291 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:35.607408  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0311 12:48:35.897587  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 12:48:35.899501  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0311 12:48:35.899563  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0311 12:48:36.071543  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0311 12:48:36.071615  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0311 12:48:36.082160  747291 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-109866" context rescaled to 1 replicas
	I0311 12:48:36.317912  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0311 12:48:36.317976  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0311 12:48:36.349613  747291 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 12:48:36.349677  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0311 12:48:36.526301  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 12:48:36.952352  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.343354144s)
	I0311 12:48:36.952468  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.171652905s)
	I0311 12:48:37.612769  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:38.134176  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.343307986s)
	I0311 12:48:39.416236  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.61790455s)
	I0311 12:48:39.416430  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.4858686s)
	I0311 12:48:39.416473  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.48199751s)
	I0311 12:48:39.457975  747291 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0311 12:48:39.458068  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:39.518024  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	W0311 12:48:39.540145  747291 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
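
This warning is an optimistic-concurrency conflict: default-storageclass and storage-provisioner-rancher both edit the storageclass.kubernetes.io/is-default-class annotation, and one update landed on a stale resourceVersion. If the cluster ends up with no default or two, the annotation can be reconciled by hand; clearing local-path so "standard" wins is an illustrative choice, not necessarily what this test needs:

    # Inspect both classes, then clear the default flag on local-path.
    kubectl --context addons-109866 get storageclass
    kubectl --context addons-109866 patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
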
	I0311 12:48:39.678310  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:39.954324  747291 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0311 12:48:40.039207  747291 addons.go:234] Setting addon gcp-auth=true in "addons-109866"
	I0311 12:48:40.039287  747291 host.go:66] Checking if "addons-109866" exists ...
	I0311 12:48:40.039880  747291 cli_runner.go:164] Run: docker container inspect addons-109866 --format={{.State.Status}}
	I0311 12:48:40.077160  747291 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0311 12:48:40.077223  747291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109866
	I0311 12:48:40.116875  747291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/addons-109866/id_rsa Username:docker}
	I0311 12:48:41.409911  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.433924051s)
	I0311 12:48:41.409945  747291 addons.go:470] Verifying addon ingress=true in "addons-109866"
	I0311 12:48:41.412194  747291 out.go:177] * Verifying ingress addon...
	I0311 12:48:41.410164  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.089356018s)
	I0311 12:48:41.410257  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.885399668s)
	I0311 12:48:41.410337  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.946929106s)
	I0311 12:48:41.410374  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.802900376s)
	I0311 12:48:41.410416  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.512803596s)
	I0311 12:48:41.414973  747291 addons.go:470] Verifying addon registry=true in "addons-109866"
	I0311 12:48:41.417649  747291 out.go:177] * Verifying registry addon...
	I0311 12:48:41.415702  747291 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0311 12:48:41.415734  747291 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0311 12:48:41.415747  747291 addons.go:470] Verifying addon metrics-server=true in "addons-109866"
	I0311 12:48:41.421663  747291 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-109866 service yakd-dashboard -n yakd-dashboard
	
	I0311 12:48:41.420341  747291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0311 12:48:41.420370  747291 retry.go:31] will retry after 260.813213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0311 12:48:41.425591  747291 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0311 12:48:41.425615  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:41.429372  747291 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0311 12:48:41.429437  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:41.684641  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
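
The two apply failures above are the usual CRD-establishment race: the VolumeSnapshotClass CRD and a VolumeSnapshotClass object travel in the same apply batch, and the object's REST mapping does not exist yet, hence "ensure CRDs are installed first". minikube simply retries (here with --force); the race can also be avoided by sequencing the applies, sketched below with the same manifest paths:

    # Apply the CRDs first, wait until they are established, then apply
    # the objects that depend on them.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
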
	I0311 12:48:41.953871  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:41.954543  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:42.135794  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:42.431138  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:42.431409  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:42.922677  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.396279866s)
	I0311 12:48:42.922712  747291 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-109866"
	I0311 12:48:42.928165  747291 out.go:177] * Verifying csi-hostpath-driver addon...
	I0311 12:48:42.922897  747291 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.845712958s)
	I0311 12:48:42.934051  747291 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0311 12:48:42.931260  747291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0311 12:48:42.931834  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:42.938127  747291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:48:42.936898  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:42.940024  747291 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0311 12:48:42.940044  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0311 12:48:42.954435  747291 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0311 12:48:42.954462  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:43.042557  747291 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0311 12:48:43.042586  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0311 12:48:43.080276  747291 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 12:48:43.080340  747291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0311 12:48:43.102723  747291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 12:48:43.425736  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:43.430177  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:43.442330  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:43.764356  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.079619149s)
	I0311 12:48:43.926740  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:43.930398  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:43.942764  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:44.142266  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:44.199193  747291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.096390963s)
	I0311 12:48:44.202038  747291 addons.go:470] Verifying addon gcp-auth=true in "addons-109866"
	I0311 12:48:44.204299  747291 out.go:177] * Verifying gcp-auth addon...
	I0311 12:48:44.206854  747291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0311 12:48:44.210463  747291 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0311 12:48:44.210488  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:44.425203  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:44.428879  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:44.442568  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:44.714104  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:44.925306  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:44.928136  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:44.941727  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:45.211464  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:45.425750  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:45.430135  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:45.443105  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:45.712000  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:45.926025  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:45.929433  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:45.941046  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:46.210931  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:46.425580  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:46.430356  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:46.442526  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:46.612118  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:46.712321  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:46.924961  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:46.929074  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:46.942243  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:47.211354  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:47.425348  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:47.429841  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:47.441303  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:47.710328  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:47.930668  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:47.932059  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:47.942310  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:48.211171  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:48.426467  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:48.433317  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:48.482444  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:48.614365  747291 pod_ready.go:102] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"False"
	I0311 12:48:48.711540  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:48.926348  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:48.930092  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:48.942445  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:49.211997  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:49.436122  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:49.440786  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:49.450167  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:49.612962  747291 pod_ready.go:92] pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.612997  747291 pod_ready.go:81] duration metric: took 14.007577169s for pod "coredns-5dd5756b68-k6fgr" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.613027  747291 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ttll7" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.615487  747291 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ttll7" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ttll7" not found
	I0311 12:48:49.615547  747291 pod_ready.go:81] duration metric: took 2.506618ms for pod "coredns-5dd5756b68-ttll7" in "kube-system" namespace to be "Ready" ...
	E0311 12:48:49.615573  747291 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ttll7" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ttll7" not found
	I0311 12:48:49.615595  747291 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.622074  747291 pod_ready.go:92] pod "etcd-addons-109866" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.622100  747291 pod_ready.go:81] duration metric: took 6.479796ms for pod "etcd-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.622116  747291 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.628700  747291 pod_ready.go:92] pod "kube-apiserver-addons-109866" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.628723  747291 pod_ready.go:81] duration metric: took 6.570544ms for pod "kube-apiserver-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.628735  747291 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.639539  747291 pod_ready.go:92] pod "kube-controller-manager-addons-109866" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.639610  747291 pod_ready.go:81] duration metric: took 10.867116ms for pod "kube-controller-manager-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.639636  747291 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sbsmh" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.710540  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:49.810553  747291 pod_ready.go:92] pod "kube-proxy-sbsmh" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:49.810580  747291 pod_ready.go:81] duration metric: took 170.921937ms for pod "kube-proxy-sbsmh" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.810592  747291 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:49.925502  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:49.928614  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:49.941434  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:50.210766  747291 pod_ready.go:92] pod "kube-scheduler-addons-109866" in "kube-system" namespace has status "Ready":"True"
	I0311 12:48:50.210792  747291 pod_ready.go:81] duration metric: took 400.192911ms for pod "kube-scheduler-addons-109866" in "kube-system" namespace to be "Ready" ...
	I0311 12:48:50.210804  747291 pod_ready.go:38] duration metric: took 14.627765173s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 12:48:50.210840  747291 api_server.go:52] waiting for apiserver process to appear ...
	I0311 12:48:50.210934  747291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 12:48:50.213086  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:50.230256  747291 api_server.go:72] duration metric: took 17.904331374s to wait for apiserver process to appear ...
	I0311 12:48:50.230329  747291 api_server.go:88] waiting for apiserver healthz status ...
	I0311 12:48:50.230366  747291 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 12:48:50.239228  747291 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0311 12:48:50.241400  747291 api_server.go:141] control plane version: v1.28.4
	I0311 12:48:50.241431  747291 api_server.go:131] duration metric: took 11.079915ms to wait for apiserver health ...
	I0311 12:48:50.241442  747291 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 12:48:50.431800  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:50.436447  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:50.444200  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:50.449095  747291 system_pods.go:59] 18 kube-system pods found
	I0311 12:48:50.449143  747291 system_pods.go:61] "coredns-5dd5756b68-k6fgr" [e5c98387-4a6b-4a1b-9d84-0ba3de8e1798] Running
	I0311 12:48:50.449154  747291 system_pods.go:61] "csi-hostpath-attacher-0" [6ea56bf9-3a15-4722-aed9-c371a7a41885] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 12:48:50.449163  747291 system_pods.go:61] "csi-hostpath-resizer-0" [7a2ce1e9-7676-47c3-b51c-e771ca974f68] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 12:48:50.449175  747291 system_pods.go:61] "csi-hostpathplugin-ppdhc" [7a7d2e57-ae08-4a57-83cb-84db6e736c72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 12:48:50.449186  747291 system_pods.go:61] "etcd-addons-109866" [235fd778-1509-445c-b8d2-0e5a9a43192c] Running
	I0311 12:48:50.449190  747291 system_pods.go:61] "kindnet-dhnct" [41d1c11b-a3a9-478d-ad84-fe95dbd72f82] Running
	I0311 12:48:50.449194  747291 system_pods.go:61] "kube-apiserver-addons-109866" [fb04e6cf-bccf-4ccf-b7f3-6bf00a27afa8] Running
	I0311 12:48:50.449199  747291 system_pods.go:61] "kube-controller-manager-addons-109866" [3dc4f128-46e8-42ec-8621-799298eaac21] Running
	I0311 12:48:50.449209  747291 system_pods.go:61] "kube-ingress-dns-minikube" [fd805e6a-7c5e-423b-b249-5bf6eae790f1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 12:48:50.449214  747291 system_pods.go:61] "kube-proxy-sbsmh" [f7e8830b-f777-4eb9-bbdb-517eee989dd1] Running
	I0311 12:48:50.449220  747291 system_pods.go:61] "kube-scheduler-addons-109866" [c06db79c-c7d0-4935-a7f9-6642a62fc830] Running
	I0311 12:48:50.449226  747291 system_pods.go:61] "metrics-server-69cf46c98-vx8dw" [e82eba82-b9ef-4607-9492-6a41d0ca5885] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 12:48:50.449233  747291 system_pods.go:61] "nvidia-device-plugin-daemonset-jd445" [6386f2cb-771c-4f32-9490-ef0becc98007] Running
	I0311 12:48:50.449238  747291 system_pods.go:61] "registry-htvdt" [a674420f-29d1-47aa-96b0-e37549d4e224] Running
	I0311 12:48:50.449250  747291 system_pods.go:61] "registry-proxy-t89kt" [b429a937-8de5-46fc-885a-51a33440731e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 12:48:50.449258  747291 system_pods.go:61] "snapshot-controller-58dbcc7b99-glnz7" [d40bb50d-a35a-46fb-8da3-61c5565336d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 12:48:50.449269  747291 system_pods.go:61] "snapshot-controller-58dbcc7b99-m464j" [a4136872-af85-40c4-b509-a24da63d7681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 12:48:50.449274  747291 system_pods.go:61] "storage-provisioner" [2724cbee-338e-432f-953e-5d651d12e62f] Running
	I0311 12:48:50.449282  747291 system_pods.go:74] duration metric: took 207.832641ms to wait for pod list to return data ...
	I0311 12:48:50.449295  747291 default_sa.go:34] waiting for default service account to be created ...
	I0311 12:48:50.610114  747291 default_sa.go:45] found service account: "default"
	I0311 12:48:50.610139  747291 default_sa.go:55] duration metric: took 160.836961ms for default service account to be created ...
	I0311 12:48:50.610149  747291 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 12:48:50.714829  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:50.818582  747291 system_pods.go:86] 18 kube-system pods found
	I0311 12:48:50.818625  747291 system_pods.go:89] "coredns-5dd5756b68-k6fgr" [e5c98387-4a6b-4a1b-9d84-0ba3de8e1798] Running
	I0311 12:48:50.818637  747291 system_pods.go:89] "csi-hostpath-attacher-0" [6ea56bf9-3a15-4722-aed9-c371a7a41885] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 12:48:50.818645  747291 system_pods.go:89] "csi-hostpath-resizer-0" [7a2ce1e9-7676-47c3-b51c-e771ca974f68] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 12:48:50.818654  747291 system_pods.go:89] "csi-hostpathplugin-ppdhc" [7a7d2e57-ae08-4a57-83cb-84db6e736c72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 12:48:50.818660  747291 system_pods.go:89] "etcd-addons-109866" [235fd778-1509-445c-b8d2-0e5a9a43192c] Running
	I0311 12:48:50.818664  747291 system_pods.go:89] "kindnet-dhnct" [41d1c11b-a3a9-478d-ad84-fe95dbd72f82] Running
	I0311 12:48:50.818670  747291 system_pods.go:89] "kube-apiserver-addons-109866" [fb04e6cf-bccf-4ccf-b7f3-6bf00a27afa8] Running
	I0311 12:48:50.818675  747291 system_pods.go:89] "kube-controller-manager-addons-109866" [3dc4f128-46e8-42ec-8621-799298eaac21] Running
	I0311 12:48:50.818683  747291 system_pods.go:89] "kube-ingress-dns-minikube" [fd805e6a-7c5e-423b-b249-5bf6eae790f1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 12:48:50.818687  747291 system_pods.go:89] "kube-proxy-sbsmh" [f7e8830b-f777-4eb9-bbdb-517eee989dd1] Running
	I0311 12:48:50.818697  747291 system_pods.go:89] "kube-scheduler-addons-109866" [c06db79c-c7d0-4935-a7f9-6642a62fc830] Running
	I0311 12:48:50.818703  747291 system_pods.go:89] "metrics-server-69cf46c98-vx8dw" [e82eba82-b9ef-4607-9492-6a41d0ca5885] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 12:48:50.818712  747291 system_pods.go:89] "nvidia-device-plugin-daemonset-jd445" [6386f2cb-771c-4f32-9490-ef0becc98007] Running
	I0311 12:48:50.818717  747291 system_pods.go:89] "registry-htvdt" [a674420f-29d1-47aa-96b0-e37549d4e224] Running
	I0311 12:48:50.818722  747291 system_pods.go:89] "registry-proxy-t89kt" [b429a937-8de5-46fc-885a-51a33440731e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 12:48:50.818729  747291 system_pods.go:89] "snapshot-controller-58dbcc7b99-glnz7" [d40bb50d-a35a-46fb-8da3-61c5565336d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 12:48:50.818735  747291 system_pods.go:89] "snapshot-controller-58dbcc7b99-m464j" [a4136872-af85-40c4-b509-a24da63d7681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 12:48:50.818739  747291 system_pods.go:89] "storage-provisioner" [2724cbee-338e-432f-953e-5d651d12e62f] Running
	I0311 12:48:50.818749  747291 system_pods.go:126] duration metric: took 208.592768ms to wait for k8s-apps to be running ...
	I0311 12:48:50.818757  747291 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 12:48:50.818816  747291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 12:48:50.838658  747291 system_svc.go:56] duration metric: took 19.891216ms WaitForService to wait for kubelet
	I0311 12:48:50.838739  747291 kubeadm.go:576] duration metric: took 18.51281897s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 12:48:50.838867  747291 node_conditions.go:102] verifying NodePressure condition ...
	I0311 12:48:50.925399  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:50.928503  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:50.942755  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:51.010638  747291 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 12:48:51.010673  747291 node_conditions.go:123] node cpu capacity is 2
	I0311 12:48:51.010687  747291 node_conditions.go:105] duration metric: took 171.787794ms to run NodePressure ...
	I0311 12:48:51.010701  747291 start.go:240] waiting for startup goroutines ...
	I0311 12:48:51.211767  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:51.430218  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:51.431246  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:51.442335  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:51.711894  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:51.928498  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:51.931688  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:51.943101  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:52.210728  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:52.425478  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:52.428586  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:52.442244  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:52.711170  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:52.933400  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:52.938854  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:52.953080  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:53.211496  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:53.426270  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:53.443505  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:53.444853  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:53.711091  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:53.925196  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:53.928281  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:53.942155  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:54.211087  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:54.426252  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:54.430618  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:54.442771  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:54.712317  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:54.926077  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:54.929504  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:54.942518  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:55.211147  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:55.425616  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:55.429904  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:55.441880  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:55.712240  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:55.930243  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:55.936378  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:55.944817  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:56.210688  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:56.426663  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:56.429971  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:56.444255  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:56.711349  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:56.925567  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:56.929377  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:56.942434  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:57.211050  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:57.432277  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:57.447798  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:57.450899  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:57.710884  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:57.927579  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:57.930101  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:57.941340  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:58.210591  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:58.426177  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:58.431146  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:48:58.445636  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:58.711144  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:58.927406  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:58.929970  747291 kapi.go:107] duration metric: took 17.509627272s to wait for kubernetes.io/minikube-addons=registry ...
	I0311 12:48:58.941572  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:59.210384  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:59.425325  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:59.443461  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:48:59.710858  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:48:59.925797  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:48:59.942603  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:00.212227  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:00.425836  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:00.449683  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:00.711619  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:00.926227  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:00.951670  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:01.210816  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:01.427002  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:01.442851  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:01.716473  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:01.931285  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:01.943636  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:02.211424  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:02.425744  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:02.443183  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:02.711144  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:02.924705  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:02.943790  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:03.210323  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:03.425043  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:03.441304  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:03.710901  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:03.925636  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:03.942930  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:04.210390  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:04.427471  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:04.441922  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:04.710538  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:04.925331  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:04.942385  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:05.210659  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:05.425156  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:05.441573  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:05.711793  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:05.933419  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:05.941759  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:06.210908  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:06.426561  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:06.444056  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:06.712882  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:06.926327  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:06.945349  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:07.211543  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:07.431387  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:07.449057  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:07.713557  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:07.925155  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:07.942736  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:08.210723  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:08.425487  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:08.442008  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:08.711732  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:08.925191  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:08.942317  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:09.211015  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:09.425724  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:09.442328  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:09.711445  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:09.925191  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:09.941676  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:10.210673  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:10.425196  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:10.442426  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:10.711470  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:10.925247  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:10.941885  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:11.210815  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:11.425994  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:11.441998  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:11.710273  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:11.926759  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:11.942539  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:12.212841  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:12.427587  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:12.441897  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:12.711367  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:12.926121  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:12.944002  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:13.211229  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:13.425303  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:13.442278  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:13.711294  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:13.924806  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:13.942202  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:14.210885  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:14.425673  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:14.443346  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:14.712025  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:14.925803  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:14.941752  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:15.210681  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:15.425276  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:15.442865  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:15.711106  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:15.925243  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:15.943506  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:16.211746  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:16.425441  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:16.442275  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:16.712464  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:16.925853  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:16.941817  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:17.213616  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:17.431343  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:17.442870  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:17.711515  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:17.926278  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:17.943184  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:18.211711  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:18.425339  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:18.442588  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:18.711538  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:18.925531  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:18.942468  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:19.211926  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:19.425919  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:19.442034  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:19.711228  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:19.925564  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:19.942284  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:20.211225  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:20.425206  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:20.442365  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:20.710821  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:20.925515  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:20.942511  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:21.213176  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:21.425103  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:21.441467  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:21.712106  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:21.934904  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:21.946227  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:22.211688  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:22.426022  747291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:49:22.441442  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:22.711532  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:22.925682  747291 kapi.go:107] duration metric: took 41.509976827s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0311 12:49:22.945022  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:23.211064  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:23.442033  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:23.711726  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:23.942956  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:24.210920  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:24.441526  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:24.711083  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:24.941735  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:25.218042  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:25.442186  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:25.710993  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:25.941764  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:26.210364  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:26.441955  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:26.711542  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:26.946166  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:27.217948  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:27.441573  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:27.712773  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:27.942066  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:28.211200  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:28.441942  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:49:28.712247  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:28.942649  747291 kapi.go:107] duration metric: took 46.011387477s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0311 12:49:29.210249  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:29.711049  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:30.211577  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:30.711465  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:31.210439  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:31.710397  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:32.211258  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:32.710842  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:33.211754  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:33.712569  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:34.210559  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:34.710669  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:35.210902  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:35.711595  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:36.211164  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:36.711345  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:37.210632  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:37.710891  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:38.210841  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:38.711205  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:39.211498  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:39.710493  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:40.211167  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:40.712228  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:41.210964  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:41.711563  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:42.211864  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:42.715180  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:43.213394  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:43.710388  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:44.210487  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:44.711194  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:45.211618  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:45.710885  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:46.210517  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:46.711248  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:47.210515  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:47.710643  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:48.211650  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:48.710591  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:49.210253  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:49.711955  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:50.211480  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:50.711078  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:51.211292  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:51.710720  747291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:49:52.210316  747291 kapi.go:107] duration metric: took 1m8.003457134s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0311 12:49:52.211916  747291 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-109866 cluster.
	I0311 12:49:52.214033  747291 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0311 12:49:52.215566  747291 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0311 12:49:52.217283  747291 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0311 12:49:52.218893  747291 addons.go:505] duration metric: took 1m19.892711s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0311 12:49:52.218941  747291 start.go:245] waiting for cluster config update ...
	I0311 12:49:52.218961  747291 start.go:254] writing updated cluster config ...
	I0311 12:49:52.219271  747291 ssh_runner.go:195] Run: rm -f paused
	I0311 12:49:52.530893  747291 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 12:49:52.532907  747291 out.go:177] * Done! kubectl is now configured to use "addons-109866" cluster and "default" namespace by default
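	The `gcp-auth-skip-secret` hint in the start log above is applied per pod, in the pod spec at creation time (the addon's mutating webhook — "GCP Auth Webhook" in the logs below — acts at admission). A minimal sketch of an opted-out pod follows; the pod name and image are hypothetical, and since the message above says only the label *key* matters, the "true" value is just a conventional placeholder:
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds               # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"   # key checked by the gcp-auth webhook; value is a placeholder
	    spec:
	      containers:
	      - name: app
	        image: nginx                   # hypothetical image
	
	For pods that already existed before the addon came up, the log suggests either recreating them or rerunning the addon enable with the refresh flag (e.g. `minikube addons enable gcp-auth --refresh`).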
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	9fddd20da6d14       72903ddab4e04       14 seconds ago       Exited              gadget                       4                   9e0611893c7a0       gadget-ggm8x
	28ed61146e2b9       fc9db2894f4e4       32 seconds ago       Exited              helper-pod                   0                   aac1b2a0bbc15       helper-pod-delete-pvc-28835a80-bbb1-42b9-a246-925c8b10c615
	e86ca9e0a4002       fc9db2894f4e4       39 seconds ago       Exited              helper-pod                   0                   c5c39690afb39       helper-pod-create-pvc-28835a80-bbb1-42b9-a246-925c8b10c615
	d6ee104aeb6b9       1499ed4fbd0aa       47 seconds ago       Exited              minikube-ingress-dns         4                   cc3cc6645d403       kube-ingress-dns-minikube
	849cf95bb8776       bafe72500920c       About a minute ago   Running             gcp-auth                     0                   c36c69d1994c7       gcp-auth-5f6b4f85fd-5ltww
	0ff4d333d12e0       6505abd14fdf8       About a minute ago   Running             controller                   0                   a3570b62cd2bb       ingress-nginx-controller-76dc478dd8-tln2r
	232b2bf74420e       1a024e390dd05       About a minute ago   Exited              patch                        1                   4e6a266ad4556       ingress-nginx-admission-patch-tq8qj
	d21561d10b3f2       1a024e390dd05       About a minute ago   Exited              create                       0                   149ef766c4f67       ingress-nginx-admission-create-2lcwn
	042616b30d8e0       20e3f2db01e81       About a minute ago   Running             yakd                         0                   10703483ce7b9       yakd-dashboard-9947fc6bf-cg9cw
	0c9abe5f0ef0c       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   f5081c5f44a57       snapshot-controller-58dbcc7b99-glnz7
	d325bdb344e77       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   56b6a1de92aa2       snapshot-controller-58dbcc7b99-m464j
	9725ec16da0a7       5cd7991a1c728       2 minutes ago        Running             metrics-server               0                   ca251c58036e0       metrics-server-69cf46c98-vx8dw
	6da5f515d1514       41340d5d57adb       2 minutes ago        Running             cloud-spanner-emulator       0                   7292192102de8       cloud-spanner-emulator-6548d5df46-kt592
	4deddb1b3e244       97e04611ad434       2 minutes ago        Running             coredns                      0                   0f28cf1e914c7       coredns-5dd5756b68-k6fgr
	3cbaca7299409       ba04bb24b9575       2 minutes ago        Running             storage-provisioner          0                   f0f892b4050b6       storage-provisioner
	75830bc702c7c       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                  0                   1daf4c11e983d       kindnet-dhnct
	2260a3b94348b       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                   0                   c9b569705eb58       kube-proxy-sbsmh
	d253b8b91fc0b       05c284c929889       2 minutes ago        Running             kube-scheduler               0                   d046b3483e8c7       kube-scheduler-addons-109866
	ce5f4d541da98       9961cbceaf234       2 minutes ago        Running             kube-controller-manager      0                   8848dd3c6ee03       kube-controller-manager-addons-109866
	a6165e0945dce       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver               0                   fcb97d51c231f       kube-apiserver-addons-109866
	fcca5b6d52b25       9cdd6470f48c8       2 minutes ago        Running             etcd                         0                   211516c8a58d4       etcd-addons-109866
	
	
	==> containerd <==
	Mar 11 12:50:53 addons-109866 containerd[757]: time="2024-03-11T12:50:53.220786088Z" level=error msg="ContainerStatus for \"c0b3a26e0542e1b517c655bed8b04797669ba01d547a4d40d3da54467d1ab7c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0b3a26e0542e1b517c655bed8b04797669ba01d547a4d40d3da54467d1ab7c3\": not found"
	Mar 11 12:50:53 addons-109866 containerd[757]: time="2024-03-11T12:50:53.221226005Z" level=error msg="ContainerStatus for \"cb6c09b6aa310ce524be3a6ce9b463b38278585da08194e377afc1adb4ccb0f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb6c09b6aa310ce524be3a6ce9b463b38278585da08194e377afc1adb4ccb0f8\": not found"
	Mar 11 12:50:53 addons-109866 containerd[757]: time="2024-03-11T12:50:53.221839140Z" level=error msg="ContainerStatus for \"286a91057bbc99f6bcbb39016db7e4540ae50abb8c280063053ef09a26fa4802\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"286a91057bbc99f6bcbb39016db7e4540ae50abb8c280063053ef09a26fa4802\": not found"
	Mar 11 12:50:53 addons-109866 containerd[757]: time="2024-03-11T12:50:53.222279894Z" level=error msg="ContainerStatus for \"80821743bb6dc4f85c534b75712ef4056489dfc9776b949acf39d5cfd392510a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80821743bb6dc4f85c534b75712ef4056489dfc9776b949acf39d5cfd392510a\": not found"
	Mar 11 12:50:53 addons-109866 containerd[757]: time="2024-03-11T12:50:53.222782104Z" level=error msg="ContainerStatus for \"ac15bcec8e7d623927b5919fed21446889930036018e373f92aa6df049c505db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac15bcec8e7d623927b5919fed21446889930036018e373f92aa6df049c505db\": not found"
	Mar 11 12:50:53 addons-109866 containerd[757]: time="2024-03-11T12:50:53.223422858Z" level=error msg="ContainerStatus for \"70795b4b2a5e7a3a56b15575511b1daddd25eac52baa696472b8ef06f7babe22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70795b4b2a5e7a3a56b15575511b1daddd25eac52baa696472b8ef06f7babe22\": not found"
	Mar 11 12:50:53 addons-109866 containerd[757]: time="2024-03-11T12:50:53.223874607Z" level=error msg="ContainerStatus for \"c0b3a26e0542e1b517c655bed8b04797669ba01d547a4d40d3da54467d1ab7c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0b3a26e0542e1b517c655bed8b04797669ba01d547a4d40d3da54467d1ab7c3\": not found"
	Mar 11 12:50:53 addons-109866 containerd[757]: time="2024-03-11T12:50:53.224324436Z" level=error msg="ContainerStatus for \"cb6c09b6aa310ce524be3a6ce9b463b38278585da08194e377afc1adb4ccb0f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb6c09b6aa310ce524be3a6ce9b463b38278585da08194e377afc1adb4ccb0f8\": not found"
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.535277892Z" level=info msg="Kill container \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\""
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.573979132Z" level=info msg="shim disconnected" id=dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.574255217Z" level=warning msg="cleaning up after shim disconnected" id=dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451 namespace=k8s.io
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.574342954Z" level=info msg="cleaning up dead shim"
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.584888252Z" level=warning msg="cleanup warnings time=\"2024-03-11T12:50:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8790 runtime=io.containerd.runc.v2\n"
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.587753854Z" level=info msg="StopContainer for \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\" returns successfully"
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.588806258Z" level=info msg="StopPodSandbox for \"b1b63b2c27525abe87419a215a345c0ee544fc1406111699d8fe0dd996afc3c0\""
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.589127029Z" level=info msg="Container to stop \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.649818193Z" level=info msg="shim disconnected" id=b1b63b2c27525abe87419a215a345c0ee544fc1406111699d8fe0dd996afc3c0
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.650190205Z" level=warning msg="cleaning up after shim disconnected" id=b1b63b2c27525abe87419a215a345c0ee544fc1406111699d8fe0dd996afc3c0 namespace=k8s.io
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.650298045Z" level=info msg="cleaning up dead shim"
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.660560865Z" level=warning msg="cleanup warnings time=\"2024-03-11T12:50:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8822 runtime=io.containerd.runc.v2\n"
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.688656926Z" level=info msg="TearDown network for sandbox \"b1b63b2c27525abe87419a215a345c0ee544fc1406111699d8fe0dd996afc3c0\" successfully"
	Mar 11 12:50:58 addons-109866 containerd[757]: time="2024-03-11T12:50:58.688834493Z" level=info msg="StopPodSandbox for \"b1b63b2c27525abe87419a215a345c0ee544fc1406111699d8fe0dd996afc3c0\" returns successfully"
	Mar 11 12:50:59 addons-109866 containerd[757]: time="2024-03-11T12:50:59.143840908Z" level=info msg="RemoveContainer for \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\""
	Mar 11 12:50:59 addons-109866 containerd[757]: time="2024-03-11T12:50:59.152331097Z" level=info msg="RemoveContainer for \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\" returns successfully"
	Mar 11 12:50:59 addons-109866 containerd[757]: time="2024-03-11T12:50:59.155634786Z" level=error msg="ContainerStatus for \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\": not found"
	
	
	==> coredns [4deddb1b3e244037464605871f9ffd92cde5acd350edcb7658058f9b4bbfdfc7] <==
	[INFO] 10.244.0.5:58011 - 20519 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002513527s
	[INFO] 10.244.0.5:50699 - 52489 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000172865s
	[INFO] 10.244.0.5:50699 - 11050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128467s
	[INFO] 10.244.0.5:51843 - 15743 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171117s
	[INFO] 10.244.0.5:51843 - 123 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135294s
	[INFO] 10.244.0.5:59091 - 58693 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072918s
	[INFO] 10.244.0.5:59091 - 28487 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000368s
	[INFO] 10.244.0.5:58027 - 51168 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088467s
	[INFO] 10.244.0.5:58027 - 8421 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060012s
	[INFO] 10.244.0.5:40631 - 47265 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001752194s
	[INFO] 10.244.0.5:40631 - 16546 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001592827s
	[INFO] 10.244.0.5:33101 - 55747 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000066461s
	[INFO] 10.244.0.5:33101 - 11998 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072459s
	[INFO] 10.244.0.20:60982 - 56325 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001782972s
	[INFO] 10.244.0.20:50050 - 37931 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001756534s
	[INFO] 10.244.0.20:48807 - 25663 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101817s
	[INFO] 10.244.0.20:39214 - 29750 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094227s
	[INFO] 10.244.0.20:45216 - 18834 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088918s
	[INFO] 10.244.0.20:54165 - 53382 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088196s
	[INFO] 10.244.0.20:54068 - 19032 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002149025s
	[INFO] 10.244.0.20:57863 - 4229 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00223055s
	[INFO] 10.244.0.20:58667 - 38203 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000653018s
	[INFO] 10.244.0.20:44494 - 63699 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000672102s
	[INFO] 10.244.0.21:51257 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000182062s
	[INFO] 10.244.0.21:43052 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099897s
	
	
	==> describe nodes <==
	Name:               addons-109866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-109866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=addons-109866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T12_48_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-109866
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 12:48:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-109866
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 12:50:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 12:50:53 +0000   Mon, 11 Mar 2024 12:48:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 12:50:53 +0000   Mon, 11 Mar 2024 12:48:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 12:50:53 +0000   Mon, 11 Mar 2024 12:48:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 12:50:53 +0000   Mon, 11 Mar 2024 12:48:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-109866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bd5453af7c34cb1b04f96a160b0fb4f
	  System UUID:                8e525322-b209-4cf4-bc23-f3ded0274e04
	  Boot ID:                    26506771-5b0e-4b52-8e79-b1a5a7798867
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-kt592      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-ggm8x                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gcp-auth                    gcp-auth-5f6b4f85fd-5ltww                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-tln2r    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m19s
	  kube-system                 coredns-5dd5756b68-k6fgr                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 etcd-addons-109866                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m40s
	  kube-system                 kindnet-dhnct                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-addons-109866                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-controller-manager-addons-109866        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-sbsmh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-addons-109866                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 metrics-server-69cf46c98-vx8dw               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m22s
	  kube-system                 snapshot-controller-58dbcc7b99-glnz7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 snapshot-controller-58dbcc7b99-m464j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-cg9cw               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m50s (x8 over 2m50s)  kubelet          Node addons-109866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x8 over 2m50s)  kubelet          Node addons-109866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x7 over 2m50s)  kubelet          Node addons-109866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m40s                  kubelet          Node addons-109866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m40s                  kubelet          Node addons-109866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m40s                  kubelet          Node addons-109866 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m40s                  kubelet          Node addons-109866 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m30s                  kubelet          Node addons-109866 status is now: NodeReady
	  Normal  RegisteredNode           2m29s                  node-controller  Node addons-109866 event: Registered Node addons-109866 in Controller
	
	
	==> dmesg <==
	[  +0.001009] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=0000000083809770
	[  +0.001111] FS-Cache: N-key=[8] '603c5c0100000000'
	[  +0.003066] FS-Cache: Duplicate cookie detected
	[  +0.000736] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=00000000bc2d205d
	[  +0.001189] FS-Cache: O-key=[8] '603c5c0100000000'
	[  +0.000722] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=0000000051073a19
	[  +0.001067] FS-Cache: N-key=[8] '603c5c0100000000'
	[  +2.719188] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001095] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=0000000013ff9938
	[  +0.001167] FS-Cache: O-key=[8] '5f3c5c0100000000'
	[  +0.000709] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=000000008609c792
	[  +0.001120] FS-Cache: N-key=[8] '5f3c5c0100000000'
	[  +0.365130] FS-Cache: Duplicate cookie detected
	[  +0.000776] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001043] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=00000000c4ee4e31
	[  +0.001133] FS-Cache: O-key=[8] '653c5c0100000000'
	[  +0.000748] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000983] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=0000000083809770
	[  +0.001103] FS-Cache: N-key=[8] '653c5c0100000000'
	[Mar11 11:51] hrtimer: interrupt took 2085213 ns
	[Mar11 12:42] systemd-journald[222]: Failed to send WATCHDOG=1 notification message: Connection refused
	
	
	==> etcd [fcca5b6d52b255689a40c57790767057efec952fd6e465b7474d6e3b65546cb1] <==
	{"level":"info","ts":"2024-03-11T12:48:11.548578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-11T12:48:11.548679Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-11T12:48:11.55041Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T12:48:11.550508Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-11T12:48:11.550521Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-11T12:48:11.556278Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T12:48:11.556321Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T12:48:12.420695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-11T12:48:12.420808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-11T12:48:12.420912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-11T12:48:12.42097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T12:48:12.421003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-11T12:48:12.42107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-11T12:48:12.421108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-11T12:48:12.424871Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T12:48:12.427979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-109866 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T12:48:12.428053Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T12:48:12.42881Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T12:48:12.428923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T12:48:12.428992Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T12:48:12.429589Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T12:48:12.42971Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T12:48:12.4301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T12:48:12.430159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T12:48:12.441082Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [849cf95bb87766f0ca68d6a8300e6b17b46db9267afde731140ce9a2396230a5] <==
	2024/03/11 12:49:51 GCP Auth Webhook started!
	2024/03/11 12:50:03 Ready to marshal response ...
	2024/03/11 12:50:03 Ready to write response ...
	2024/03/11 12:50:11 Ready to marshal response ...
	2024/03/11 12:50:11 Ready to write response ...
	2024/03/11 12:50:18 Ready to marshal response ...
	2024/03/11 12:50:18 Ready to write response ...
	2024/03/11 12:50:19 Ready to marshal response ...
	2024/03/11 12:50:19 Ready to write response ...
	2024/03/11 12:50:27 Ready to marshal response ...
	2024/03/11 12:50:27 Ready to write response ...
	2024/03/11 12:50:42 Ready to marshal response ...
	2024/03/11 12:50:42 Ready to write response ...
	
	
	==> kernel <==
	 12:51:00 up  4:33,  0 users,  load average: 1.30, 2.38, 2.79
	Linux addons-109866 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [75830bc702c7c40322b8ece18238ee3ac83b25eb4f1886f166f659962c8ea1cb] <==
	I0311 12:48:55.842112       1 main.go:227] handling current node
	I0311 12:49:05.852607       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:49:05.852635       1 main.go:227] handling current node
	I0311 12:49:15.857792       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:49:15.857817       1 main.go:227] handling current node
	I0311 12:49:25.869414       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:49:25.869444       1 main.go:227] handling current node
	I0311 12:49:35.917362       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:49:35.917391       1 main.go:227] handling current node
	I0311 12:49:45.929066       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:49:45.929094       1 main.go:227] handling current node
	I0311 12:49:55.942049       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:49:55.942080       1 main.go:227] handling current node
	I0311 12:50:05.952394       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:05.952424       1 main.go:227] handling current node
	I0311 12:50:15.956953       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:15.957626       1 main.go:227] handling current node
	I0311 12:50:25.970071       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:25.970115       1 main.go:227] handling current node
	I0311 12:50:35.974566       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:35.974595       1 main.go:227] handling current node
	I0311 12:50:45.987317       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:45.987362       1 main.go:227] handling current node
	I0311 12:50:56.003434       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:50:56.003467       1 main.go:227] handling current node
	
	
	==> kube-apiserver [a6165e0945dcecdd3546773eb735a8a1061006f886d2c105db84c01fe241e0ca] <==
	W0311 12:49:02.531985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 12:49:02.532052       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0311 12:49:02.532333       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.165.135:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.165.135:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.165.135:443: connect: connection refused
	I0311 12:49:02.532457       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0311 12:49:02.534598       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.165.135:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.165.135:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.165.135:443: connect: connection refused
	W0311 12:49:03.532781       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 12:49:03.532837       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0311 12:49:03.532791       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 12:49:03.533095       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 12:49:03.533123       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 12:49:03.535110       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0311 12:49:07.546123       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.165.135:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.165.135:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
	W0311 12:49:07.546258       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 12:49:07.546298       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0311 12:49:07.676253       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0311 12:49:07.698948       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 12:49:07.706578       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 12:49:16.726627       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0311 12:50:06.987502       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400968ff50), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400703ee10), ResponseWriter:(*httpsnoop.rw)(0x400703ee10), Flusher:(*httpsnoop.rw)(0x400703ee10), CloseNotifier:(*httpsnoop.rw)(0x400703ee10), Pusher:(*httpsnoop.rw)(0x400703ee10)}}, encoder:(*versioning.codec)(0x40044d5d60), memAllocator:(*runtime.Allocator)(0x4006d098d8)})
	I0311 12:50:16.725725       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 12:50:20.218434       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0311 12:50:43.733593       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [ce5f4d541da9803f2e592f5cbc86244d8503fcb630bbb1c3eb41696c53a2d65b] <==
	I0311 12:49:19.869475       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0311 12:49:19.869621       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0311 12:49:20.104775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="27.566427ms"
	I0311 12:49:20.104884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="71.934µs"
	I0311 12:49:22.682669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="57.034µs"
	I0311 12:49:33.428236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="27.96077ms"
	I0311 12:49:33.428633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="67.75µs"
	I0311 12:49:49.019648       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0311 12:49:49.023547       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0311 12:49:49.064290       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0311 12:49:49.064471       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0311 12:49:51.828380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="14.995526ms"
	I0311 12:49:51.828626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="200.753µs"
	I0311 12:49:52.849046       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 12:50:01.734510       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 12:50:07.798964       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="10.019µs"
	I0311 12:50:10.089776       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 12:50:18.768848       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0311 12:50:18.991089       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 12:50:22.425878       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 12:50:28.473839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="19.963µs"
	I0311 12:50:31.735573       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 12:50:42.210987       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 12:50:52.271008       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0311 12:50:52.361307       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	
	
	==> kube-proxy [2260a3b94348bcef7e2bfc11cf30d679d9aca3f41c2c21f9f32f71246a44aaf6] <==
	I0311 12:48:33.865597       1 server_others.go:69] "Using iptables proxy"
	I0311 12:48:33.882941       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0311 12:48:33.909681       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0311 12:48:33.911495       1 server_others.go:152] "Using iptables Proxier"
	I0311 12:48:33.911523       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0311 12:48:33.911529       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0311 12:48:33.911552       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 12:48:33.911744       1 server.go:846] "Version info" version="v1.28.4"
	I0311 12:48:33.911754       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 12:48:33.913072       1 config.go:188] "Starting service config controller"
	I0311 12:48:33.913084       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 12:48:33.913101       1 config.go:97] "Starting endpoint slice config controller"
	I0311 12:48:33.913106       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 12:48:33.913446       1 config.go:315] "Starting node config controller"
	I0311 12:48:33.913452       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 12:48:34.013674       1 shared_informer.go:318] Caches are synced for node config
	I0311 12:48:34.018057       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 12:48:34.018108       1 shared_informer.go:318] Caches are synced for service config
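	
	The route_localnet hint kube-proxy prints above names two component-config knobs that avoid the sysctl: `iptables.localhostNodePorts` and `nodePortAddresses`. A minimal sketch of the relevant stanza, assuming kube-proxy v1.28's kubeproxy.config.k8s.io/v1alpha1 schema; the CIDR is a hypothetical example matching this cluster's 192.168.49.0/24 node network:
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    # Either stop serving NodePorts on loopback ...
	    iptables:
	      localhostNodePorts: false
	    # ... or restrict NodePorts to non-loopback addresses:
	    nodePortAddresses:
	    - "192.168.49.0/24"   # hypothetical; any list excluding 127.0.0.0/8 filters loopback
	
	Per the log message, either setting removes the need for kube-proxy to set route_localnet=1.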
	
	
	==> kube-scheduler [d253b8b91fc0b2588014a884cac6639ef7ef50c2ad7f93a5b5da851bdb34e760] <==
	W0311 12:48:16.956293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 12:48:16.956310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 12:48:16.956388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 12:48:16.956417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 12:48:16.965257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 12:48:16.965299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 12:48:16.965365       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 12:48:16.965394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 12:48:16.965453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 12:48:16.965485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 12:48:17.803370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 12:48:17.803467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 12:48:17.829322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 12:48:17.829661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 12:48:17.928145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 12:48:17.928215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 12:48:17.960317       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 12:48:17.960356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 12:48:17.971198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 12:48:17.971388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 12:48:18.019017       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 12:48:18.019231       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 12:48:18.091221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 12:48:18.091409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0311 12:48:20.137610       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.222479    1478 scope.go:117] "RemoveContainer" containerID="ac15bcec8e7d623927b5919fed21446889930036018e373f92aa6df049c505db"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.223039    1478 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac15bcec8e7d623927b5919fed21446889930036018e373f92aa6df049c505db"} err="failed to get container status \"ac15bcec8e7d623927b5919fed21446889930036018e373f92aa6df049c505db\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac15bcec8e7d623927b5919fed21446889930036018e373f92aa6df049c505db\": not found"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.223078    1478 scope.go:117] "RemoveContainer" containerID="70795b4b2a5e7a3a56b15575511b1daddd25eac52baa696472b8ef06f7babe22"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.223613    1478 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70795b4b2a5e7a3a56b15575511b1daddd25eac52baa696472b8ef06f7babe22"} err="failed to get container status \"70795b4b2a5e7a3a56b15575511b1daddd25eac52baa696472b8ef06f7babe22\": rpc error: code = NotFound desc = an error occurred when try to find container \"70795b4b2a5e7a3a56b15575511b1daddd25eac52baa696472b8ef06f7babe22\": not found"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.223639    1478 scope.go:117] "RemoveContainer" containerID="c0b3a26e0542e1b517c655bed8b04797669ba01d547a4d40d3da54467d1ab7c3"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.224032    1478 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0b3a26e0542e1b517c655bed8b04797669ba01d547a4d40d3da54467d1ab7c3"} err="failed to get container status \"c0b3a26e0542e1b517c655bed8b04797669ba01d547a4d40d3da54467d1ab7c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0b3a26e0542e1b517c655bed8b04797669ba01d547a4d40d3da54467d1ab7c3\": not found"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.224100    1478 scope.go:117] "RemoveContainer" containerID="cb6c09b6aa310ce524be3a6ce9b463b38278585da08194e377afc1adb4ccb0f8"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.224578    1478 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb6c09b6aa310ce524be3a6ce9b463b38278585da08194e377afc1adb4ccb0f8"} err="failed to get container status \"cb6c09b6aa310ce524be3a6ce9b463b38278585da08194e377afc1adb4ccb0f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb6c09b6aa310ce524be3a6ce9b463b38278585da08194e377afc1adb4ccb0f8\": not found"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.346725    1478 csi_plugin.go:178] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin hostpath.csi.k8s.io
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.956732    1478 scope.go:117] "RemoveContainer" containerID="d6ee104aeb6b95631ee73e236df55ea240ab42af5110ff3bf4bdac283a373522"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: E0311 12:50:53.957067    1478 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(fd805e6a-7c5e-423b-b249-5bf6eae790f1)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="fd805e6a-7c5e-423b-b249-5bf6eae790f1"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.958902    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6ea56bf9-3a15-4722-aed9-c371a7a41885" path="/var/lib/kubelet/pods/6ea56bf9-3a15-4722-aed9-c371a7a41885/volumes"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.959304    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7a2ce1e9-7676-47c3-b51c-e771ca974f68" path="/var/lib/kubelet/pods/7a2ce1e9-7676-47c3-b51c-e771ca974f68/volumes"
	Mar 11 12:50:53 addons-109866 kubelet[1478]: I0311 12:50:53.959658    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7a7d2e57-ae08-4a57-83cb-84db6e736c72" path="/var/lib/kubelet/pods/7a7d2e57-ae08-4a57-83cb-84db6e736c72/volumes"
	Mar 11 12:50:58 addons-109866 kubelet[1478]: I0311 12:50:58.792948    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4f447d4-b7e6-418e-9eb2-15be43d9f857-config-volume\") pod \"e4f447d4-b7e6-418e-9eb2-15be43d9f857\" (UID: \"e4f447d4-b7e6-418e-9eb2-15be43d9f857\") "
	Mar 11 12:50:58 addons-109866 kubelet[1478]: I0311 12:50:58.793888    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4f447d4-b7e6-418e-9eb2-15be43d9f857-config-volume" (OuterVolumeSpecName: "config-volume") pod "e4f447d4-b7e6-418e-9eb2-15be43d9f857" (UID: "e4f447d4-b7e6-418e-9eb2-15be43d9f857"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 11 12:50:58 addons-109866 kubelet[1478]: I0311 12:50:58.794189    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq5rd\" (UniqueName: \"kubernetes.io/projected/e4f447d4-b7e6-418e-9eb2-15be43d9f857-kube-api-access-pq5rd\") pod \"e4f447d4-b7e6-418e-9eb2-15be43d9f857\" (UID: \"e4f447d4-b7e6-418e-9eb2-15be43d9f857\") "
	Mar 11 12:50:58 addons-109866 kubelet[1478]: I0311 12:50:58.794694    1478 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4f447d4-b7e6-418e-9eb2-15be43d9f857-config-volume\") on node \"addons-109866\" DevicePath \"\""
	Mar 11 12:50:58 addons-109866 kubelet[1478]: I0311 12:50:58.798700    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f447d4-b7e6-418e-9eb2-15be43d9f857-kube-api-access-pq5rd" (OuterVolumeSpecName: "kube-api-access-pq5rd") pod "e4f447d4-b7e6-418e-9eb2-15be43d9f857" (UID: "e4f447d4-b7e6-418e-9eb2-15be43d9f857"). InnerVolumeSpecName "kube-api-access-pq5rd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 12:50:58 addons-109866 kubelet[1478]: I0311 12:50:58.895896    1478 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pq5rd\" (UniqueName: \"kubernetes.io/projected/e4f447d4-b7e6-418e-9eb2-15be43d9f857-kube-api-access-pq5rd\") on node \"addons-109866\" DevicePath \"\""
	Mar 11 12:50:59 addons-109866 kubelet[1478]: I0311 12:50:59.127125    1478 scope.go:117] "RemoveContainer" containerID="dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451"
	Mar 11 12:50:59 addons-109866 kubelet[1478]: I0311 12:50:59.153927    1478 scope.go:117] "RemoveContainer" containerID="dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451"
	Mar 11 12:50:59 addons-109866 kubelet[1478]: E0311 12:50:59.156091    1478 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\": not found" containerID="dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451"
	Mar 11 12:50:59 addons-109866 kubelet[1478]: I0311 12:50:59.156151    1478 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451"} err="failed to get container status \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc047de3010f0d6aaf88e7b163e5f48d04646bb9b12a16b2ea06b78f6da52451\": not found"
	Mar 11 12:50:59 addons-109866 kubelet[1478]: I0311 12:50:59.960261    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e4f447d4-b7e6-418e-9eb2-15be43d9f857" path="/var/lib/kubelet/pods/e4f447d4-b7e6-418e-9eb2-15be43d9f857/volumes"
	
	
	==> storage-provisioner [3cbaca72994090d255ccee0e7e99140a0718028d4c82d05eba0fa1f2230e6ef8] <==
	I0311 12:48:39.024053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 12:48:39.111181       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 12:48:39.111291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 12:48:39.220507       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 12:48:39.220688       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-109866_b52c2dbf-066e-42f7-a420-bc0218bbc508!
	I0311 12:48:39.221713       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8f346508-6b10-4a09-bec9-d1b8b9d85914", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-109866_b52c2dbf-066e-42f7-a420-bc0218bbc508 became leader
	I0311 12:48:39.528884       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-109866_b52c2dbf-066e-42f7-a420-bc0218bbc508!
	E0311 12:50:27.693593       1 controller.go:1050] claim "28835a80-bbb1-42b9-a246-925c8b10c615" in work queue no longer exists
	

-- /stdout --
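Note on the kube-scheduler "forbidden" warnings in the log above: they occur during control-plane startup, before the scheduler's RBAC bindings are visible to the API server, and the closing "Caches are synced" line indicates they resolved on their own. A quick way to confirm the grants on a live cluster (a sketch, assuming the addons-109866 profile is still running):

	kubectl --context addons-109866 get clusterrolebinding system:kube-scheduler -o wide
	kubectl --context addons-109866 auth can-i list persistentvolumeclaims --as=system:kube-scheduler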
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-109866 -n addons-109866
helpers_test.go:261: (dbg) Run:  kubectl --context addons-109866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-2lcwn ingress-nginx-admission-patch-tq8qj
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-109866 describe pod ingress-nginx-admission-create-2lcwn ingress-nginx-admission-patch-tq8qj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-109866 describe pod ingress-nginx-admission-create-2lcwn ingress-nginx-admission-patch-tq8qj: exit status 1 (85.077144ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2lcwn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tq8qj" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-109866 describe pod ingress-nginx-admission-create-2lcwn ingress-nginx-admission-patch-tq8qj: exit status 1
--- FAIL: TestAddons/parallel/CSI (69.25s)
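The NotFound errors from "describe pod" above are not a second failure: ingress-nginx-admission-create and ingress-nginx-admission-patch are one-shot Job pods that had already been cleaned up by the time the post-mortem ran. To inspect them they must be caught while the ingress addon is still enabled; a sketch (the label selector is taken from the upstream ingress-nginx manifests):

	kubectl --context addons-109866 -n ingress-nginx get jobs
	kubectl --context addons-109866 -n ingress-nginx get pods -l app.kubernetes.io/component=admission-webhook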

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image load --daemon gcr.io/google-containers/addon-resizer:functional-891062 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 image load --daemon gcr.io/google-containers/addon-resizer:functional-891062 --alsologtostderr: (4.255059169s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-891062" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.50s)
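This failure, and the ImageReloadDaemon and ImageTagAndLoadDaemon failures below, share one pattern: "image load --daemon" exits 0 but the tag never shows up in "image ls". A minimal manual reproduction of what the test drives (a sketch, assuming a running functional-891062 profile):

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-891062
	out/minikube-linux-arm64 -p functional-891062 image load --daemon gcr.io/google-containers/addon-resizer:functional-891062
	out/minikube-linux-arm64 -p functional-891062 image ls | grep addon-resizer    # expected to print the tag; empty in this run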

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image load --daemon gcr.io/google-containers/addon-resizer:functional-891062 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 image load --daemon gcr.io/google-containers/addon-resizer:functional-891062 --alsologtostderr: (3.655590693s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-891062" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.65516301s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-891062
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image load --daemon gcr.io/google-containers/addon-resizer:functional-891062 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 image load --daemon gcr.io/google-containers/addon-resizer:functional-891062 --alsologtostderr: (3.177651868s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-891062" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image save gcr.io/google-containers/addon-resizer:functional-891062 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)
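Here "image save" exits 0 without writing the tarball, which also explains the ImageLoadFromFile failure below: its stat on the same path returns "no such file or directory". A sketch of the check, assuming the same profile:

	out/minikube-linux-arm64 -p functional-891062 image save gcr.io/google-containers/addon-resizer:functional-891062 ./addon-resizer-save.tar
	test -f ./addon-resizer-save.tar && echo "tarball written" || echo "tarball missing"    # "tarball missing" reproduces this failure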

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I0311 12:57:07.643327  779890 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:57:07.643910  779890 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:57:07.643923  779890 out.go:304] Setting ErrFile to fd 2...
	I0311 12:57:07.643929  779890 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:57:07.644307  779890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 12:57:07.645725  779890 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:57:07.645902  779890 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:57:07.646424  779890 cli_runner.go:164] Run: docker container inspect functional-891062 --format={{.State.Status}}
	I0311 12:57:07.662766  779890 ssh_runner.go:195] Run: systemctl --version
	I0311 12:57:07.662847  779890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-891062
	I0311 12:57:07.679482  779890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33758 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/functional-891062/id_rsa Username:docker}
	I0311 12:57:07.769267  779890 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0311 12:57:07.769339  779890 cache_images.go:254] Failed to load cached images for profile functional-891062. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0311 12:57:07.769365  779890 cache_images.go:262] succeeded pushing to: 
	I0311 12:57:07.769370  779890 cache_images.go:263] failed pushing to: functional-891062

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (385.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-070145 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-070145 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m20.741472798s)

-- stdout --
	* [old-k8s-version-070145] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-070145" primary control-plane node in "old-k8s-version-070145" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Restarting existing docker container for "old-k8s-version-070145" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-070145 addons enable metrics-server
	
	* Enabled addons: dashboard, storage-provisioner, metrics-server, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0311 13:35:02.163032  944071 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:35:02.163387  944071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:35:02.163402  944071 out.go:304] Setting ErrFile to fd 2...
	I0311 13:35:02.163409  944071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:35:02.163667  944071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 13:35:02.164057  944071 out.go:298] Setting JSON to false
	I0311 13:35:02.165506  944071 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19047,"bootTime":1710145056,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 13:35:02.165633  944071 start.go:139] virtualization:  
	I0311 13:35:02.169452  944071 out.go:177] * [old-k8s-version-070145] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 13:35:02.172823  944071 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 13:35:02.175011  944071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:35:02.173089  944071 notify.go:220] Checking for updates...
	I0311 13:35:02.177163  944071 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 13:35:02.179361  944071 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 13:35:02.181950  944071 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 13:35:02.183891  944071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:35:02.186400  944071 config.go:182] Loaded profile config "old-k8s-version-070145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0311 13:35:02.189042  944071 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 13:35:02.191067  944071 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:35:02.229552  944071 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 13:35:02.229661  944071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:35:02.309183  944071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-11 13:35:02.298723935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:35:02.309342  944071 docker.go:295] overlay module found
	I0311 13:35:02.315100  944071 out.go:177] * Using the docker driver based on existing profile
	I0311 13:35:02.317172  944071 start.go:297] selected driver: docker
	I0311 13:35:02.317197  944071 start.go:901] validating driver "docker" against &{Name:old-k8s-version-070145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-070145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:35:02.317316  944071 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:35:02.317906  944071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:35:02.372039  944071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-11 13:35:02.363104117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:35:02.372433  944071 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:35:02.372468  944071 cni.go:84] Creating CNI manager for ""
	I0311 13:35:02.372485  944071 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 13:35:02.372527  944071 start.go:340] cluster config:
	{Name:old-k8s-version-070145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-070145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:35:02.375054  944071 out.go:177] * Starting "old-k8s-version-070145" primary control-plane node in "old-k8s-version-070145" cluster
	I0311 13:35:02.376997  944071 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 13:35:02.378734  944071 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 13:35:02.380506  944071 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0311 13:35:02.380567  944071 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0311 13:35:02.380595  944071 cache.go:56] Caching tarball of preloaded images
	I0311 13:35:02.380609  944071 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 13:35:02.380682  944071 preload.go:173] Found /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:35:02.380691  944071 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0311 13:35:02.380837  944071 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/config.json ...
	I0311 13:35:02.403554  944071 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0311 13:35:02.403579  944071 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0311 13:35:02.403597  944071 cache.go:194] Successfully downloaded all kic artifacts
	I0311 13:35:02.403626  944071 start.go:360] acquireMachinesLock for old-k8s-version-070145: {Name:mkbdf339bb42f6ca666483787f07aea7b59195a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:35:02.403745  944071 start.go:364] duration metric: took 51.339µs to acquireMachinesLock for "old-k8s-version-070145"
	I0311 13:35:02.403777  944071 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:35:02.403796  944071 fix.go:54] fixHost starting: 
	I0311 13:35:02.404072  944071 cli_runner.go:164] Run: docker container inspect old-k8s-version-070145 --format={{.State.Status}}
	I0311 13:35:02.420295  944071 fix.go:112] recreateIfNeeded on old-k8s-version-070145: state=Stopped err=<nil>
	W0311 13:35:02.420327  944071 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:35:02.422778  944071 out.go:177] * Restarting existing docker container for "old-k8s-version-070145" ...
	I0311 13:35:02.424920  944071 cli_runner.go:164] Run: docker start old-k8s-version-070145
	I0311 13:35:02.778074  944071 cli_runner.go:164] Run: docker container inspect old-k8s-version-070145 --format={{.State.Status}}
	I0311 13:35:02.805004  944071 kic.go:430] container "old-k8s-version-070145" state is running.
	I0311 13:35:02.805393  944071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-070145
	I0311 13:35:02.838293  944071 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/config.json ...
	I0311 13:35:02.838550  944071 machine.go:94] provisionDockerMachine start ...
	I0311 13:35:02.838614  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:02.868324  944071 main.go:141] libmachine: Using SSH client type: native
	I0311 13:35:02.868604  944071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34038 <nil> <nil>}
	I0311 13:35:02.868613  944071 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 13:35:02.869598  944071 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0311 13:35:06.025174  944071 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-070145
	
	I0311 13:35:06.025205  944071 ubuntu.go:169] provisioning hostname "old-k8s-version-070145"
	I0311 13:35:06.025276  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:06.068536  944071 main.go:141] libmachine: Using SSH client type: native
	I0311 13:35:06.068830  944071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34038 <nil> <nil>}
	I0311 13:35:06.068849  944071 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-070145 && echo "old-k8s-version-070145" | sudo tee /etc/hostname
	I0311 13:35:06.244137  944071 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-070145
	
	I0311 13:35:06.244214  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:06.269721  944071 main.go:141] libmachine: Using SSH client type: native
	I0311 13:35:06.269995  944071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34038 <nil> <nil>}
	I0311 13:35:06.270019  944071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-070145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-070145/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-070145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 13:35:06.424306  944071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 13:35:06.424337  944071 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18350-741028/.minikube CaCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18350-741028/.minikube}
	I0311 13:35:06.424357  944071 ubuntu.go:177] setting up certificates
	I0311 13:35:06.424366  944071 provision.go:84] configureAuth start
	I0311 13:35:06.424430  944071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-070145
	I0311 13:35:06.462283  944071 provision.go:143] copyHostCerts
	I0311 13:35:06.462358  944071 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-741028/.minikube/ca.pem, removing ...
	I0311 13:35:06.462367  944071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-741028/.minikube/ca.pem
	I0311 13:35:06.462440  944071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/ca.pem (1078 bytes)
	I0311 13:35:06.462534  944071 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-741028/.minikube/cert.pem, removing ...
	I0311 13:35:06.462539  944071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-741028/.minikube/cert.pem
	I0311 13:35:06.462564  944071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/cert.pem (1123 bytes)
	I0311 13:35:06.462613  944071 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-741028/.minikube/key.pem, removing ...
	I0311 13:35:06.462617  944071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-741028/.minikube/key.pem
	I0311 13:35:06.462639  944071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18350-741028/.minikube/key.pem (1675 bytes)
	I0311 13:35:06.462682  944071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-070145 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-070145]
	I0311 13:35:07.247552  944071 provision.go:177] copyRemoteCerts
	I0311 13:35:07.247629  944071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 13:35:07.247676  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:07.271885  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:07.375107  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 13:35:07.416200  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0311 13:35:07.455973  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 13:35:07.504047  944071 provision.go:87] duration metric: took 1.079655131s to configureAuth
	I0311 13:35:07.504080  944071 ubuntu.go:193] setting minikube options for container-runtime
	I0311 13:35:07.504276  944071 config.go:182] Loaded profile config "old-k8s-version-070145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0311 13:35:07.504290  944071 machine.go:97] duration metric: took 4.665731382s to provisionDockerMachine
	I0311 13:35:07.504299  944071 start.go:293] postStartSetup for "old-k8s-version-070145" (driver="docker")
	I0311 13:35:07.504317  944071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 13:35:07.504375  944071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 13:35:07.504420  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:07.546681  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:07.659129  944071 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 13:35:07.662582  944071 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0311 13:35:07.662621  944071 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0311 13:35:07.662633  944071 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0311 13:35:07.662640  944071 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0311 13:35:07.662656  944071 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-741028/.minikube/addons for local assets ...
	I0311 13:35:07.662730  944071 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-741028/.minikube/files for local assets ...
	I0311 13:35:07.662824  944071 filesync.go:149] local asset: /home/jenkins/minikube-integration/18350-741028/.minikube/files/etc/ssl/certs/7464802.pem -> 7464802.pem in /etc/ssl/certs
	I0311 13:35:07.662944  944071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 13:35:07.675174  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/files/etc/ssl/certs/7464802.pem --> /etc/ssl/certs/7464802.pem (1708 bytes)
	I0311 13:35:07.714819  944071 start.go:296] duration metric: took 210.497871ms for postStartSetup
	I0311 13:35:07.714903  944071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:35:07.714950  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:07.739169  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:07.841624  944071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0311 13:35:07.850075  944071 fix.go:56] duration metric: took 5.446279115s for fixHost
	I0311 13:35:07.850098  944071 start.go:83] releasing machines lock for "old-k8s-version-070145", held for 5.44634108s
	I0311 13:35:07.850174  944071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-070145
	I0311 13:35:07.877869  944071 ssh_runner.go:195] Run: cat /version.json
	I0311 13:35:07.877922  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:07.878211  944071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 13:35:07.878254  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:07.911284  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:07.916411  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:08.020500  944071 ssh_runner.go:195] Run: systemctl --version
	I0311 13:35:08.170429  944071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 13:35:08.178306  944071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0311 13:35:08.214411  944071 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0311 13:35:08.214524  944071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 13:35:08.228730  944071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0311 13:35:08.228768  944071 start.go:494] detecting cgroup driver to use...
	I0311 13:35:08.228825  944071 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0311 13:35:08.228898  944071 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 13:35:08.248890  944071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 13:35:08.266881  944071 docker.go:217] disabling cri-docker service (if available) ...
	I0311 13:35:08.266972  944071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 13:35:08.281030  944071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 13:35:08.294331  944071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 13:35:08.435913  944071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 13:35:08.583532  944071 docker.go:233] disabling docker service ...
	I0311 13:35:08.583598  944071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 13:35:08.603029  944071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 13:35:08.625280  944071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 13:35:08.790642  944071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 13:35:08.935940  944071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 13:35:08.955574  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:35:08.979014  944071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0311 13:35:08.995642  944071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 13:35:09.007512  944071 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 13:35:09.007643  944071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 13:35:09.022468  944071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 13:35:09.038101  944071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 13:35:09.049099  944071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 13:35:09.069203  944071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 13:35:09.082946  944071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 13:35:09.098197  944071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 13:35:09.111220  944071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 13:35:09.124324  944071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:35:09.261628  944071 ssh_runner.go:195] Run: sudo systemctl restart containerd
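	# Sketch: the sed edits above switch containerd's CRI plugin to the cgroupfs
	# driver and pin the sandbox (pause) image; the result can be verified inside
	# the node (key names per containerd's CRI plugin config):
	out/minikube-linux-arm64 -p old-k8s-version-070145 ssh -- grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	# expected: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.2"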
	I0311 13:35:09.520501  944071 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0311 13:35:09.520600  944071 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0311 13:35:09.525684  944071 start.go:562] Will wait 60s for crictl version
	I0311 13:35:09.525778  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:35:09.529666  944071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 13:35:09.599259  944071 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0311 13:35:09.599352  944071 ssh_runner.go:195] Run: containerd --version
	I0311 13:35:09.642743  944071 ssh_runner.go:195] Run: containerd --version
	I0311 13:35:09.672608  944071 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0311 13:35:09.674459  944071 cli_runner.go:164] Run: docker network inspect old-k8s-version-070145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 13:35:09.700937  944071 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0311 13:35:09.704671  944071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:35:09.720077  944071 kubeadm.go:877] updating cluster {Name:old-k8s-version-070145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-070145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 13:35:09.720198  944071 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0311 13:35:09.720272  944071 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 13:35:09.770048  944071 containerd.go:612] all images are preloaded for containerd runtime.
	I0311 13:35:09.770073  944071 containerd.go:519] Images already preloaded, skipping extraction
	I0311 13:35:09.770169  944071 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 13:35:09.832803  944071 containerd.go:612] all images are preloaded for containerd runtime.
	I0311 13:35:09.832826  944071 cache_images.go:84] Images are preloaded, skipping loading
	I0311 13:35:09.832834  944071 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0311 13:35:09.832949  944071 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-070145 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-070145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 13:35:09.833019  944071 ssh_runner.go:195] Run: sudo crictl info
	I0311 13:35:09.882603  944071 cni.go:84] Creating CNI manager for ""
	I0311 13:35:09.882631  944071 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 13:35:09.882643  944071 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 13:35:09.882703  944071 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-070145 NodeName:old-k8s-version-070145 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 13:35:09.882893  944071 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-070145"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 13:35:09.882988  944071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 13:35:09.893232  944071 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 13:35:09.893327  944071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 13:35:09.902875  944071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0311 13:35:09.922553  944071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 13:35:09.958173  944071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0311 13:35:09.983549  944071 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0311 13:35:09.987580  944071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:35:09.999186  944071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:35:10.153935  944071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:35:10.172332  944071 certs.go:68] Setting up /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145 for IP: 192.168.76.2
	I0311 13:35:10.172352  944071 certs.go:194] generating shared ca certs ...
	I0311 13:35:10.172370  944071 certs.go:226] acquiring lock for ca certs: {Name:mk7162cd9946a461c84d2f2cea8ea4b87fd89373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:35:10.172501  944071 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key
	I0311 13:35:10.172545  944071 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key
	I0311 13:35:10.172553  944071 certs.go:256] generating profile certs ...
	I0311 13:35:10.172634  944071 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.key
	I0311 13:35:10.172701  944071 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/apiserver.key.0e765760
	I0311 13:35:10.172740  944071 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/proxy-client.key
	I0311 13:35:10.172978  944071 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/746480.pem (1338 bytes)
	W0311 13:35:10.173024  944071 certs.go:480] ignoring /home/jenkins/minikube-integration/18350-741028/.minikube/certs/746480_empty.pem, impossibly tiny 0 bytes
	I0311 13:35:10.173046  944071 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 13:35:10.173072  944071 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem (1078 bytes)
	I0311 13:35:10.173095  944071 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem (1123 bytes)
	I0311 13:35:10.173127  944071 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/certs/key.pem (1675 bytes)
	I0311 13:35:10.173174  944071 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-741028/.minikube/files/etc/ssl/certs/7464802.pem (1708 bytes)
	I0311 13:35:10.173849  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 13:35:10.225155  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0311 13:35:10.259600  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 13:35:10.293897  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0311 13:35:10.322373  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 13:35:10.357188  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 13:35:10.387122  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 13:35:10.419737  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 13:35:10.474871  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/files/etc/ssl/certs/7464802.pem --> /usr/share/ca-certificates/7464802.pem (1708 bytes)
	I0311 13:35:10.518312  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 13:35:10.556546  944071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-741028/.minikube/certs/746480.pem --> /usr/share/ca-certificates/746480.pem (1338 bytes)
	I0311 13:35:10.589326  944071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 13:35:10.612412  944071 ssh_runner.go:195] Run: openssl version
	I0311 13:35:10.618751  944071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 13:35:10.629840  944071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:35:10.635485  944071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 12:48 /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:35:10.635598  944071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:35:10.644118  944071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 13:35:10.653795  944071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/746480.pem && ln -fs /usr/share/ca-certificates/746480.pem /etc/ssl/certs/746480.pem"
	I0311 13:35:10.663947  944071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/746480.pem
	I0311 13:35:10.668235  944071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 12:54 /usr/share/ca-certificates/746480.pem
	I0311 13:35:10.668347  944071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/746480.pem
	I0311 13:35:10.676094  944071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/746480.pem /etc/ssl/certs/51391683.0"
	I0311 13:35:10.685725  944071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7464802.pem && ln -fs /usr/share/ca-certificates/7464802.pem /etc/ssl/certs/7464802.pem"
	I0311 13:35:10.699418  944071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7464802.pem
	I0311 13:35:10.704251  944071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 12:54 /usr/share/ca-certificates/7464802.pem
	I0311 13:35:10.704368  944071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7464802.pem
	I0311 13:35:10.711745  944071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7464802.pem /etc/ssl/certs/3ec20f2e.0"
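
Each CA above is linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how TLS stacks locate trust anchors in a hashed cert directory. A sketch of that hash-and-symlink step; linkByHash is a hypothetical helper shelling out to the same openssl invocation seen in the log:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash computes the certificate's subject hash with openssl and
	// creates the <hash>.0 symlink that the hashed cert directory expects.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		_ = linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	}
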
	I0311 13:35:10.722190  944071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 13:35:10.727097  944071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 13:35:10.735142  944071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 13:35:10.744055  944071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 13:35:10.751730  944071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 13:35:10.759584  944071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 13:35:10.767111  944071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
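
The -checkend 86400 probes above ask whether each control-plane cert expires within a day. The same check can be done natively with crypto/x509; a sketch, with the cert path taken from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires in
	// the next d -- the crypto/x509 equivalent of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
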
	I0311 13:35:10.774515  944071 kubeadm.go:391] StartCluster: {Name:old-k8s-version-070145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-070145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:35:10.774681  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0311 13:35:10.774779  944071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 13:35:10.834338  944071 cri.go:89] found id: "6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7"
	I0311 13:35:10.834415  944071 cri.go:89] found id: "a0fc5e182b8975f62b8f621229dd8ec04d5316e628f77ffe9047483f9e338afe"
	I0311 13:35:10.834441  944071 cri.go:89] found id: "596979946483c91004b096e33257a14f33b32425e5bf1c3030c3b9a18ac24a20"
	I0311 13:35:10.834463  944071 cri.go:89] found id: "8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298"
	I0311 13:35:10.834494  944071 cri.go:89] found id: "ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575"
	I0311 13:35:10.834513  944071 cri.go:89] found id: "2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918"
	I0311 13:35:10.834532  944071 cri.go:89] found id: "18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347"
	I0311 13:35:10.834553  944071 cri.go:89] found id: "5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e"
	I0311 13:35:10.834581  944071 cri.go:89] found id: "1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f"
	I0311 13:35:10.834610  944071 cri.go:89] found id: ""
	I0311 13:35:10.834691  944071 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0311 13:35:10.858098  944071 cri.go:116] JSON = null
	W0311 13:35:10.858202  944071 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 9
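
The warning above comes from cross-checking the CRI's container list against runc's: crictl reported nine IDs while `runc list` returned null JSON. A sketch of that consistency check, with both ID slices assumed to be collected already:

	package main

	import "fmt"

	// checkPausedConsistency mirrors the warning in the log: crictl reported
	// container IDs, but `runc list` returned none, so the paused state
	// cannot be trusted.
	func checkPausedConsistency(crictlIDs, runcIDs []string) error {
		if len(runcIDs) == 0 && len(crictlIDs) > 0 {
			return fmt.Errorf("list returned %d containers, but ps returned %d", len(runcIDs), len(crictlIDs))
		}
		return nil
	}

	func main() {
		err := checkPausedConsistency(make([]string, 9), nil)
		fmt.Println(err) // list returned 0 containers, but ps returned 9
	}
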
	I0311 13:35:10.858302  944071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 13:35:10.869235  944071 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 13:35:10.869307  944071 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 13:35:10.869327  944071 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 13:35:10.869410  944071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 13:35:10.878429  944071 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 13:35:10.879018  944071 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-070145" does not appear in /home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 13:35:10.879181  944071 kubeconfig.go:62] /home/jenkins/minikube-integration/18350-741028/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-070145" cluster setting kubeconfig missing "old-k8s-version-070145" context setting]
	I0311 13:35:10.879545  944071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/kubeconfig: {Name:mkea9792df2a23b99e9686253371e8a16054b02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:35:10.881278  944071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 13:35:10.891099  944071 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0311 13:35:10.891181  944071 kubeadm.go:591] duration metric: took 21.828316ms to restartPrimaryControlPlane
	I0311 13:35:10.891225  944071 kubeadm.go:393] duration metric: took 116.700726ms to StartCluster
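
restartPrimaryControlPlane finished in ~22ms here because the diff of kubeadm.yaml against kubeadm.yaml.new came back clean, so no kubeadm re-run was needed. A sketch of that compare-then-decide check, assuming plain byte equality is sufficient:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// needsReconfig compares the kubeadm config already on disk with the
	// freshly generated one; equal bytes mean the control plane can be
	// restarted without re-running kubeadm.
	func needsReconfig(current, generated string) (bool, error) {
		a, err := os.ReadFile(current)
		if err != nil {
			return true, err // missing current config: reconfigure
		}
		b, err := os.ReadFile(generated)
		if err != nil {
			return true, err
		}
		return !bytes.Equal(a, b), nil
	}

	func main() {
		changed, _ := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println("needs reconfiguration:", changed)
	}
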
	I0311 13:35:10.891275  944071 settings.go:142] acquiring lock: {Name:mk647fd5a11531f437bba0a4615b0b34bf87ec0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:35:10.891359  944071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 13:35:10.892120  944071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/kubeconfig: {Name:mkea9792df2a23b99e9686253371e8a16054b02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:35:10.892399  944071 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0311 13:35:10.894773  944071 out.go:177] * Verifying Kubernetes components...
	I0311 13:35:10.892821  944071 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 13:35:10.892930  944071 config.go:182] Loaded profile config "old-k8s-version-070145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0311 13:35:10.897097  944071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:35:10.895024  944071 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-070145"
	I0311 13:35:10.897293  944071 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-070145"
	W0311 13:35:10.897330  944071 addons.go:243] addon storage-provisioner should already be in state true
	I0311 13:35:10.897370  944071 host.go:66] Checking if "old-k8s-version-070145" exists ...
	I0311 13:35:10.897943  944071 cli_runner.go:164] Run: docker container inspect old-k8s-version-070145 --format={{.State.Status}}
	I0311 13:35:10.895030  944071 addons.go:69] Setting dashboard=true in profile "old-k8s-version-070145"
	I0311 13:35:10.898226  944071 addons.go:234] Setting addon dashboard=true in "old-k8s-version-070145"
	W0311 13:35:10.898262  944071 addons.go:243] addon dashboard should already be in state true
	I0311 13:35:10.898302  944071 host.go:66] Checking if "old-k8s-version-070145" exists ...
	I0311 13:35:10.898781  944071 cli_runner.go:164] Run: docker container inspect old-k8s-version-070145 --format={{.State.Status}}
	I0311 13:35:10.895034  944071 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-070145"
	I0311 13:35:10.899871  944071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-070145"
	I0311 13:35:10.900155  944071 cli_runner.go:164] Run: docker container inspect old-k8s-version-070145 --format={{.State.Status}}
	I0311 13:35:10.895038  944071 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-070145"
	I0311 13:35:10.900678  944071 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-070145"
	W0311 13:35:10.900715  944071 addons.go:243] addon metrics-server should already be in state true
	I0311 13:35:10.900796  944071 host.go:66] Checking if "old-k8s-version-070145" exists ...
	I0311 13:35:10.901293  944071 cli_runner.go:164] Run: docker container inspect old-k8s-version-070145 --format={{.State.Status}}
	I0311 13:35:10.984306  944071 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-070145"
	W0311 13:35:10.984330  944071 addons.go:243] addon default-storageclass should already be in state true
	I0311 13:35:10.984356  944071 host.go:66] Checking if "old-k8s-version-070145" exists ...
	I0311 13:35:10.984872  944071 cli_runner.go:164] Run: docker container inspect old-k8s-version-070145 --format={{.State.Status}}
	I0311 13:35:10.993682  944071 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:35:11.000394  944071 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0311 13:35:11.000424  944071 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:35:11.002564  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 13:35:11.004715  944071 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 13:35:11.002676  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:11.008968  944071 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0311 13:35:11.011402  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0311 13:35:11.011433  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0311 13:35:11.011510  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:11.009068  944071 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 13:35:11.011745  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 13:35:11.011788  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:11.060842  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:11.087983  944071 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 13:35:11.088006  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 13:35:11.088086  944071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-070145
	I0311 13:35:11.089863  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:11.108063  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:11.140033  944071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/old-k8s-version-070145/id_rsa Username:docker}
	I0311 13:35:11.185963  944071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:35:11.258237  944071 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-070145" to be "Ready" ...
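
The 6m node-Ready wait that starts here keeps polling while the apiserver is still coming up (the later "connection refused" lines are that same wait failing early attempts). A rough stand-in sketch that polls /healthz over HTTPS; the real check goes through client-go and inspects node conditions, and skipping TLS verification is only for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer polls the apiserver's /healthz until it answers 200
	// or the deadline passes.
	func waitForAPIServer(addr string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://" + addr + "/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", addr, timeout)
	}

	func main() {
		fmt.Println(waitForAPIServer("192.168.76.2:8443", 6*time.Minute))
	}
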
	I0311 13:35:11.287929  944071 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 13:35:11.288003  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 13:35:11.315470  944071 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 13:35:11.315544  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 13:35:11.330727  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0311 13:35:11.330831  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0311 13:35:11.341940  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:35:11.382112  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 13:35:11.384236  944071 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 13:35:11.384314  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 13:35:11.399520  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0311 13:35:11.399548  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0311 13:35:11.465775  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 13:35:11.478317  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0311 13:35:11.478407  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0311 13:35:11.525705  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.525794  944071 retry.go:31] will retry after 233.511653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
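
Every "will retry after …" line in this stretch is the same backoff pattern: run the apply, log the failure, sleep a growing jittered interval, try again. A minimal sketch of that pattern; the durations and jitter rule are illustrative, not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryAfter runs fn until it succeeds or attempts are exhausted,
	// sleeping a jittered, growing interval between tries.
	func retryAfter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		i := 0
		_ = retryAfter(5, 200*time.Millisecond, func() error {
			i++
			if i < 3 {
				return fmt.Errorf("connection to the server localhost:8443 was refused")
			}
			return nil
		})
	}
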
	I0311 13:35:11.548297  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0311 13:35:11.548373  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0311 13:35:11.576394  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0311 13:35:11.576477  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0311 13:35:11.577541  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.577605  944071 retry.go:31] will retry after 184.278931ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.603047  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0311 13:35:11.603123  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0311 13:35:11.626653  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0311 13:35:11.626738  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0311 13:35:11.645198  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.645280  944071 retry.go:31] will retry after 211.040242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.648505  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0311 13:35:11.648542  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0311 13:35:11.672055  944071 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0311 13:35:11.672142  944071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0311 13:35:11.692423  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0311 13:35:11.760172  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:35:11.762391  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0311 13:35:11.796698  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.796807  944071 retry.go:31] will retry after 238.370199ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.857098  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0311 13:35:11.891063  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.891103  944071 retry.go:31] will retry after 220.435217ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0311 13:35:11.891063  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.891118  944071 retry.go:31] will retry after 287.102303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0311 13:35:11.949849  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:11.949913  944071 retry.go:31] will retry after 205.42742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.036138  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0311 13:35:12.112677  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0311 13:35:12.126692  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.126731  944071 retry.go:31] will retry after 417.251064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.156001  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 13:35:12.178782  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0311 13:35:12.201184  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.201229  944071 retry.go:31] will retry after 610.568049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0311 13:35:12.317065  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.317102  944071 retry.go:31] will retry after 713.360596ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0311 13:35:12.353470  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.353505  944071 retry.go:31] will retry after 644.573124ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.544973  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0311 13:35:12.659122  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.659202  944071 retry.go:31] will retry after 516.415125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.812885  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0311 13:35:12.949799  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.949886  944071 retry.go:31] will retry after 889.01404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:12.999177  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:35:13.031070  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 13:35:13.176549  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0311 13:35:13.199154  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:13.199240  944071 retry.go:31] will retry after 751.66925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0311 13:35:13.246675  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:13.246758  944071 retry.go:31] will retry after 934.336065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:13.259233  944071 node_ready.go:53] error getting node "old-k8s-version-070145": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-070145": dial tcp 192.168.76.2:8443: connect: connection refused
	W0311 13:35:13.332854  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:13.332891  944071 retry.go:31] will retry after 632.685793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:13.839836  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0311 13:35:13.948004  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:13.948037  944071 retry.go:31] will retry after 1.393769037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:13.951321  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:35:13.966593  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0311 13:35:14.126903  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:14.126941  944071 retry.go:31] will retry after 1.874900756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0311 13:35:14.127000  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:14.127023  944071 retry.go:31] will retry after 1.535348832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:14.181281  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0311 13:35:14.267238  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:14.267311  944071 retry.go:31] will retry after 1.036888495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:15.259795  944071 node_ready.go:53] error getting node "old-k8s-version-070145": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-070145": dial tcp 192.168.76.2:8443: connect: connection refused
	I0311 13:35:15.305045  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 13:35:15.342687  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0311 13:35:15.404832  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:15.404876  944071 retry.go:31] will retry after 2.011184747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0311 13:35:15.440224  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:15.440308  944071 retry.go:31] will retry after 1.486725896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:15.662604  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0311 13:35:15.743145  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:15.743179  944071 retry.go:31] will retry after 2.310990132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:16.002863  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0311 13:35:16.090914  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:16.090953  944071 retry.go:31] will retry after 1.781655076s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:16.927853  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0311 13:35:17.045226  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:17.045263  944071 retry.go:31] will retry after 3.8356264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:17.416681  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0311 13:35:17.508866  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:17.508897  944071 retry.go:31] will retry after 2.062902733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:17.759666  944071 node_ready.go:53] error getting node "old-k8s-version-070145": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-070145": dial tcp 192.168.76.2:8443: connect: connection refused
	I0311 13:35:17.872881  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0311 13:35:17.958345  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:17.958383  944071 retry.go:31] will retry after 2.08763973s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:18.054747  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0311 13:35:18.134014  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:18.134052  944071 retry.go:31] will retry after 2.809791685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:19.572536  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0311 13:35:19.747756  944071 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:19.747787  944071 retry.go:31] will retry after 4.12512008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0311 13:35:20.046962  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0311 13:35:20.881717  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0311 13:35:20.944118  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:35:23.873395  944071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 13:35:29.751634  944071 node_ready.go:49] node "old-k8s-version-070145" has status "Ready":"True"
	I0311 13:35:29.751677  944071 node_ready.go:38] duration metric: took 18.493357218s for node "old-k8s-version-070145" to be "Ready" ...
	I0311 13:35:29.751693  944071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 13:35:29.893499  944071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-4c948" in "kube-system" namespace to be "Ready" ...
	I0311 13:35:29.934113  944071 pod_ready.go:92] pod "coredns-74ff55c5b-4c948" in "kube-system" namespace has status "Ready":"True"
	I0311 13:35:29.934151  944071 pod_ready.go:81] duration metric: took 40.617984ms for pod "coredns-74ff55c5b-4c948" in "kube-system" namespace to be "Ready" ...
	I0311 13:35:29.934164  944071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-070145" in "kube-system" namespace to be "Ready" ...
	I0311 13:35:29.984555  944071 pod_ready.go:92] pod "etcd-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"True"
	I0311 13:35:29.984577  944071 pod_ready.go:81] duration metric: took 50.405322ms for pod "etcd-old-k8s-version-070145" in "kube-system" namespace to be "Ready" ...
	I0311 13:35:29.984592  944071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-070145" in "kube-system" namespace to be "Ready" ...
	I0311 13:35:31.634842  944071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.587811914s)
	I0311 13:35:31.637689  944071 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-070145 addons enable metrics-server
	
	I0311 13:35:31.635020  944071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.753271533s)
	I0311 13:35:31.635074  944071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.690926353s)
	I0311 13:35:31.645992  944071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.772556582s)
	I0311 13:35:31.646037  944071 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-070145"
	I0311 13:35:31.653920  944071 out.go:177] * Enabled addons: dashboard, storage-provisioner, metrics-server, default-storageclass
	I0311 13:35:31.657278  944071 addons.go:505] duration metric: took 20.764449626s for enable addons: enabled=[dashboard storage-provisioner metrics-server default-storageclass]
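
The pod_ready lines that follow poll each system-critical pod until its Ready condition turns True or the per-pod budget expires. A minimal client-go sketch of that loop (hypothetical helper; assumes a configured Clientset, not minikube's actual pod_ready.go):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True, the way the
// pod_ready lines below poll each system-critical pod.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod has status "Ready":"True"
				}
			}
		}
		// Transient errors (e.g. connection refused while the apiserver
		// restarts) are tolerated; the context deadline bounds the wait.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
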
	I0311 13:35:31.991613  944071 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:33.992620  944071 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:36.491261  944071 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:38.491850  944071 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:38.991767  944071 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"True"
	I0311 13:35:38.991797  944071 pod_ready.go:81] duration metric: took 9.007197194s for pod "kube-apiserver-old-k8s-version-070145" in "kube-system" namespace to be "Ready" ...
	I0311 13:35:38.991810  944071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace to be "Ready" ...
	I0311 13:35:40.998200  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:42.998586  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:45.074369  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:47.497983  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:49.541193  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:51.998979  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:54.003284  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:56.022666  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:35:58.497885  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:00.499423  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:02.998710  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:05.497863  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:07.500077  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:09.998999  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:11.999519  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:14.001369  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:16.498681  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:18.499694  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:20.503187  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:22.998514  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:25.498020  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:27.500939  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:29.997634  944071 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:30.998256  944071 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"True"
	I0311 13:36:30.998289  944071 pod_ready.go:81] duration metric: took 52.006471128s for pod "kube-controller-manager-old-k8s-version-070145" in "kube-system" namespace to be "Ready" ...
	I0311 13:36:30.998302  944071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vcch" in "kube-system" namespace to be "Ready" ...
	I0311 13:36:31.005232  944071 pod_ready.go:92] pod "kube-proxy-6vcch" in "kube-system" namespace has status "Ready":"True"
	I0311 13:36:31.005256  944071 pod_ready.go:81] duration metric: took 6.945496ms for pod "kube-proxy-6vcch" in "kube-system" namespace to be "Ready" ...
	I0311 13:36:31.005268  944071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace to be "Ready" ...
	I0311 13:36:33.014131  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:35.512411  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:38.013190  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:40.025830  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:42.513659  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:44.514711  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:47.012132  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:49.511968  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:52.012324  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:54.014329  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:56.511989  944071 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"False"
	I0311 13:36:58.014747  944071 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace has status "Ready":"True"
	I0311 13:36:58.014770  944071 pod_ready.go:81] duration metric: took 27.009494044s for pod "kube-scheduler-old-k8s-version-070145" in "kube-system" namespace to be "Ready" ...
	I0311 13:36:58.014782  944071 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace to be "Ready" ...
	I0311 13:37:00.079971  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:02.521800  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:05.022195  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:07.521698  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:10.022779  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:12.526266  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:15.027548  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:17.522690  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:20.022872  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:22.521266  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:25.022746  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:27.520396  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:30.023004  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:32.522658  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:35.022062  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:37.025226  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:39.521295  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:41.521822  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:44.022424  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:46.521943  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:49.021913  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:51.022358  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:53.028148  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:55.520965  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:37:58.021680  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:00.113871  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:02.521058  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:05.022050  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:07.520924  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:09.521343  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:12.021506  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:14.521978  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:16.522381  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:19.020844  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:21.022499  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:23.026881  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:25.520562  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:27.521541  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:30.027373  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:32.520831  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:35.021894  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:37.023151  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:39.521017  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:41.521963  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:43.522808  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:46.022108  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:48.022806  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:50.032049  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:52.521328  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:54.521824  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:57.026587  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:38:59.521725  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:01.521871  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:04.020649  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:06.021872  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:08.521591  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:11.022004  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:13.521634  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:16.022480  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:18.024938  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:20.520794  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:22.521838  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:25.022412  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:27.521489  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:29.522573  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:32.022630  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:34.522230  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:37.026458  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:39.520721  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:41.521250  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:43.528601  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:46.021997  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:48.022166  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:50.022559  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:52.521663  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:54.522497  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:56.525794  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:39:59.020959  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:01.022004  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:03.027396  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:05.520987  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:07.521129  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:09.521623  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:12.025796  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:14.521394  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:16.521770  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:18.522436  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:21.023784  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:23.027065  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:25.523504  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:28.021381  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:30.024110  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:32.107226  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:34.521359  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:37.023356  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:39.522008  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:42.024191  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:44.522287  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:47.022156  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:49.520617  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:51.521039  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:53.521085  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:56.021763  944071 pod_ready.go:102] pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace has status "Ready":"False"
	I0311 13:40:58.021428  944071 pod_ready.go:81] duration metric: took 4m0.006632014s for pod "metrics-server-9975d5f86-fjvd8" in "kube-system" namespace to be "Ready" ...
	E0311 13:40:58.021456  944071 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0311 13:40:58.021464  944071 pod_ready.go:38] duration metric: took 5m28.269759464s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
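
The 4m0s that just elapsed for metrics-server-9975d5f86-fjvd8 is an ordinary context deadline wrapped around the readiness poll. A usage fragment, reusing the hypothetical waitPodReady and a clientset cs from the earlier sketch:

// The per-pod budget is a plain context deadline around the poll loop.
ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
defer cancel()
if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-9975d5f86-fjvd8"); err != nil {
	// metrics-server never becomes Ready here: the kubelet problems at the
	// end of this log show its image reference points at the unresolvable
	// fake.domain, so the pod stays in ImagePullBackOff and this returns
	// context.DeadlineExceeded ("WaitExtra: waitPodCondition: context deadline exceeded").
	fmt.Printf("WaitExtra: %v\n", err)
}
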
	I0311 13:40:58.021479  944071 api_server.go:52] waiting for apiserver process to appear ...
	I0311 13:40:58.021508  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0311 13:40:58.021569  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 13:40:58.079449  944071 cri.go:89] found id: "2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb"
	I0311 13:40:58.079474  944071 cri.go:89] found id: "1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f"
	I0311 13:40:58.079479  944071 cri.go:89] found id: ""
	I0311 13:40:58.079487  944071 logs.go:276] 2 containers: [2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb 1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f]
	I0311 13:40:58.079545  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.083484  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.087301  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0311 13:40:58.087377  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 13:40:58.132274  944071 cri.go:89] found id: "3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760"
	I0311 13:40:58.132295  944071 cri.go:89] found id: "5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e"
	I0311 13:40:58.132300  944071 cri.go:89] found id: ""
	I0311 13:40:58.132309  944071 logs.go:276] 2 containers: [3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760 5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e]
	I0311 13:40:58.132371  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.136329  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.139894  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0311 13:40:58.139981  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 13:40:58.183800  944071 cri.go:89] found id: "1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1"
	I0311 13:40:58.183839  944071 cri.go:89] found id: "6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7"
	I0311 13:40:58.183844  944071 cri.go:89] found id: ""
	I0311 13:40:58.183857  944071 logs.go:276] 2 containers: [1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1 6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7]
	I0311 13:40:58.183941  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.187937  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.191733  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0311 13:40:58.191811  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 13:40:58.248640  944071 cri.go:89] found id: "39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1"
	I0311 13:40:58.248662  944071 cri.go:89] found id: "18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347"
	I0311 13:40:58.248667  944071 cri.go:89] found id: ""
	I0311 13:40:58.248674  944071 logs.go:276] 2 containers: [39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1 18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347]
	I0311 13:40:58.248737  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.252691  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.256116  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0311 13:40:58.256183  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 13:40:58.302491  944071 cri.go:89] found id: "1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5"
	I0311 13:40:58.302522  944071 cri.go:89] found id: "8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298"
	I0311 13:40:58.302528  944071 cri.go:89] found id: ""
	I0311 13:40:58.302536  944071 logs.go:276] 2 containers: [1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5 8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298]
	I0311 13:40:58.302678  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.306935  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.310475  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 13:40:58.310580  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 13:40:58.373172  944071 cri.go:89] found id: "bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79"
	I0311 13:40:58.373201  944071 cri.go:89] found id: "2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918"
	I0311 13:40:58.373219  944071 cri.go:89] found id: ""
	I0311 13:40:58.373249  944071 logs.go:276] 2 containers: [bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79 2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918]
	I0311 13:40:58.373325  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.378954  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.383201  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0311 13:40:58.383281  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 13:40:58.426540  944071 cri.go:89] found id: "881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1"
	I0311 13:40:58.426561  944071 cri.go:89] found id: "ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575"
	I0311 13:40:58.426565  944071 cri.go:89] found id: ""
	I0311 13:40:58.426578  944071 logs.go:276] 2 containers: [881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1 ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575]
	I0311 13:40:58.426637  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.430233  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.433871  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 13:40:58.433950  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 13:40:58.481178  944071 cri.go:89] found id: "bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9"
	I0311 13:40:58.481198  944071 cri.go:89] found id: ""
	I0311 13:40:58.481206  944071 logs.go:276] 1 containers: [bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9]
	I0311 13:40:58.481265  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.485075  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0311 13:40:58.485147  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 13:40:58.524320  944071 cri.go:89] found id: "91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503"
	I0311 13:40:58.524341  944071 cri.go:89] found id: "96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2"
	I0311 13:40:58.524345  944071 cri.go:89] found id: ""
	I0311 13:40:58.524352  944071 logs.go:276] 2 containers: [91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503 96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2]
	I0311 13:40:58.524445  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:40:58.530121  944071 ssh_runner.go:195] Run: which crictl
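
For the post-mortem, each component's container IDs are discovered with the exact command shown above, sudo crictl ps -a --quiet --name=<component>, before its logs are tailed. A small Go sketch of that shell-out (hypothetical wrapper around the quoted command):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs printed by `crictl ps -a --quiet --name=<name>`,
// one per line, exactly as the cri.go:89 "found id:" lines record them.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	fmt.Println(ids, err)
}
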
	I0311 13:40:58.534188  944071 logs.go:123] Gathering logs for etcd [5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e] ...
	I0311 13:40:58.534225  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e"
	I0311 13:40:58.605868  944071 logs.go:123] Gathering logs for coredns [1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1] ...
	I0311 13:40:58.605905  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1"
	I0311 13:40:58.653793  944071 logs.go:123] Gathering logs for kindnet [881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1] ...
	I0311 13:40:58.653826  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1"
	I0311 13:40:58.695746  944071 logs.go:123] Gathering logs for kubernetes-dashboard [bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9] ...
	I0311 13:40:58.695778  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9"
	I0311 13:40:58.743564  944071 logs.go:123] Gathering logs for storage-provisioner [96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2] ...
	I0311 13:40:58.743594  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2"
	I0311 13:40:58.783019  944071 logs.go:123] Gathering logs for kubelet ...
	I0311 13:40:58.783048  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0311 13:40:58.831302  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.459417     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:40:58.831534  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.459726     664 reflector.go:138] object-"kube-system"/"coredns-token-qkqt2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-qkqt2" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:40:58.831739  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.459902     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:40:58.831955  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.460076     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-gxdwg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gxdwg" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:40:58.832180  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.460251     664 reflector.go:138] object-"kube-system"/"kindnet-token-rnqx8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rnqx8" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:40:58.841316  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:30 old-k8s-version-070145 kubelet[664]: E0311 13:35:30.669341     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:40:58.841517  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:31 old-k8s-version-070145 kubelet[664]: E0311 13:35:31.153659     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.844261  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:44 old-k8s-version-070145 kubelet[664]: E0311 13:35:44.954825     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:40:58.845967  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:56 old-k8s-version-070145 kubelet[664]: E0311 13:35:56.925831     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.846758  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:59 old-k8s-version-070145 kubelet[664]: E0311 13:35:59.291208     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.847221  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:00 old-k8s-version-070145 kubelet[664]: E0311 13:36:00.325909     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.847657  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:01 old-k8s-version-070145 kubelet[664]: E0311 13:36:01.328676     664 pod_workers.go:191] Error syncing pod 56c93912-57eb-4ab7-8853-172caa7e74d0 ("storage-provisioner_kube-system(56c93912-57eb-4ab7-8853-172caa7e74d0)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(56c93912-57eb-4ab7-8853-172caa7e74d0)"
	W0311 13:40:58.847990  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:03 old-k8s-version-070145 kubelet[664]: E0311 13:36:03.832442     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.850435  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:08 old-k8s-version-070145 kubelet[664]: E0311 13:36:08.925493     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:40:58.851366  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:14 old-k8s-version-070145 kubelet[664]: E0311 13:36:14.371867     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.851678  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:22 old-k8s-version-070145 kubelet[664]: E0311 13:36:22.915611     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.852002  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:23 old-k8s-version-070145 kubelet[664]: E0311 13:36:23.832156     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.852591  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:35 old-k8s-version-070145 kubelet[664]: E0311 13:36:35.418432     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.852784  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:35 old-k8s-version-070145 kubelet[664]: E0311 13:36:35.917817     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.853109  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:43 old-k8s-version-070145 kubelet[664]: E0311 13:36:43.832567     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.855518  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:49 old-k8s-version-070145 kubelet[664]: E0311 13:36:49.924277     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:40:58.855845  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:56 old-k8s-version-070145 kubelet[664]: E0311 13:36:56.917736     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.856029  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:03 old-k8s-version-070145 kubelet[664]: E0311 13:37:03.915495     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.856351  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:08 old-k8s-version-070145 kubelet[664]: E0311 13:37:08.918019     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.856536  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:14 old-k8s-version-070145 kubelet[664]: E0311 13:37:14.917495     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.857152  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:23 old-k8s-version-070145 kubelet[664]: E0311 13:37:23.531510     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.857484  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:24 old-k8s-version-070145 kubelet[664]: E0311 13:37:24.534676     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.857666  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:28 old-k8s-version-070145 kubelet[664]: E0311 13:37:28.918169     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.857994  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:36 old-k8s-version-070145 kubelet[664]: E0311 13:37:36.918270     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.858179  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:42 old-k8s-version-070145 kubelet[664]: E0311 13:37:42.915790     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.858502  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:48 old-k8s-version-070145 kubelet[664]: E0311 13:37:48.916052     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.858685  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:54 old-k8s-version-070145 kubelet[664]: E0311 13:37:54.916972     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.859049  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:03 old-k8s-version-070145 kubelet[664]: E0311 13:38:03.915594     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.859240  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:08 old-k8s-version-070145 kubelet[664]: E0311 13:38:08.915793     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.859598  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:17 old-k8s-version-070145 kubelet[664]: E0311 13:38:17.915139     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.862035  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:23 old-k8s-version-070145 kubelet[664]: E0311 13:38:23.923389     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:40:58.862362  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:29 old-k8s-version-070145 kubelet[664]: E0311 13:38:29.915142     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.862546  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:34 old-k8s-version-070145 kubelet[664]: E0311 13:38:34.915655     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.862872  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:41 old-k8s-version-070145 kubelet[664]: E0311 13:38:41.915198     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.863052  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:45 old-k8s-version-070145 kubelet[664]: E0311 13:38:45.915534     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.863633  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:54 old-k8s-version-070145 kubelet[664]: E0311 13:38:54.718973     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.863819  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:58 old-k8s-version-070145 kubelet[664]: E0311 13:38:58.915749     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.864145  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:03 old-k8s-version-070145 kubelet[664]: E0311 13:39:03.832416     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.864327  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:09 old-k8s-version-070145 kubelet[664]: E0311 13:39:09.916058     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.864660  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:17 old-k8s-version-070145 kubelet[664]: E0311 13:39:17.915142     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.864850  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:24 old-k8s-version-070145 kubelet[664]: E0311 13:39:24.915425     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.865177  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:31 old-k8s-version-070145 kubelet[664]: E0311 13:39:31.915158     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.865362  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:35 old-k8s-version-070145 kubelet[664]: E0311 13:39:35.915581     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.865686  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:42 old-k8s-version-070145 kubelet[664]: E0311 13:39:42.915728     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.865872  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:48 old-k8s-version-070145 kubelet[664]: E0311 13:39:48.916029     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.866200  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:56 old-k8s-version-070145 kubelet[664]: E0311 13:39:56.916329     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.866382  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:59 old-k8s-version-070145 kubelet[664]: E0311 13:39:59.916223     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.866709  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:11 old-k8s-version-070145 kubelet[664]: E0311 13:40:11.915247     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.866890  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:11 old-k8s-version-070145 kubelet[664]: E0311 13:40:11.916186     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.867073  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:22 old-k8s-version-070145 kubelet[664]: E0311 13:40:22.921762     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.867457  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:26 old-k8s-version-070145 kubelet[664]: E0311 13:40:26.915380     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.867646  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:36 old-k8s-version-070145 kubelet[664]: E0311 13:40:36.915484     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.867975  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: E0311 13:40:39.915140     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:58.868159  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:50 old-k8s-version-070145 kubelet[664]: E0311 13:40:50.915444     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:58.868487  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: E0311 13:40:53.915252     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	I0311 13:40:58.868498  944071 logs.go:123] Gathering logs for kube-scheduler [18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347] ...
	I0311 13:40:58.868512  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347"
	I0311 13:40:58.914662  944071 logs.go:123] Gathering logs for etcd [3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760] ...
	I0311 13:40:58.914696  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760"
	I0311 13:40:58.961521  944071 logs.go:123] Gathering logs for kube-scheduler [39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1] ...
	I0311 13:40:58.961608  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1"
	I0311 13:40:59.005366  944071 logs.go:123] Gathering logs for kube-proxy [1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5] ...
	I0311 13:40:59.005475  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5"
	I0311 13:40:59.045829  944071 logs.go:123] Gathering logs for kube-proxy [8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298] ...
	I0311 13:40:59.045859  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298"
	I0311 13:40:59.085076  944071 logs.go:123] Gathering logs for kube-controller-manager [bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79] ...
	I0311 13:40:59.085105  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79"
	I0311 13:40:59.144503  944071 logs.go:123] Gathering logs for container status ...
	I0311 13:40:59.144537  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:40:59.218905  944071 logs.go:123] Gathering logs for storage-provisioner [91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503] ...
	I0311 13:40:59.218933  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503"
	I0311 13:40:59.263075  944071 logs.go:123] Gathering logs for dmesg ...
	I0311 13:40:59.263103  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:40:59.282074  944071 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:40:59.282102  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:40:59.441208  944071 logs.go:123] Gathering logs for kube-apiserver [2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb] ...
	I0311 13:40:59.441237  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb"
	I0311 13:40:59.508829  944071 logs.go:123] Gathering logs for kube-apiserver [1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f] ...
	I0311 13:40:59.508869  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f"
	I0311 13:40:59.572255  944071 logs.go:123] Gathering logs for coredns [6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7] ...
	I0311 13:40:59.572294  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7"
	I0311 13:40:59.620790  944071 logs.go:123] Gathering logs for kube-controller-manager [2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918] ...
	I0311 13:40:59.620843  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918"
	I0311 13:40:59.698473  944071 logs.go:123] Gathering logs for kindnet [ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575] ...
	I0311 13:40:59.698506  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575"
	I0311 13:40:59.743547  944071 logs.go:123] Gathering logs for containerd ...
	I0311 13:40:59.743574  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0311 13:40:59.807738  944071 out.go:304] Setting ErrFile to fd 2...
	I0311 13:40:59.807772  944071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0311 13:40:59.807839  944071 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0311 13:40:59.807853  944071 out.go:239]   Mar 11 13:40:26 old-k8s-version-070145 kubelet[664]: E0311 13:40:26.915380     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	  Mar 11 13:40:26 old-k8s-version-070145 kubelet[664]: E0311 13:40:26.915380     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:59.807868  944071 out.go:239]   Mar 11 13:40:36 old-k8s-version-070145 kubelet[664]: E0311 13:40:36.915484     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 11 13:40:36 old-k8s-version-070145 kubelet[664]: E0311 13:40:36.915484     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:59.807878  944071 out.go:239]   Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: E0311 13:40:39.915140     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	  Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: E0311 13:40:39.915140     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:40:59.807906  944071 out.go:239]   Mar 11 13:40:50 old-k8s-version-070145 kubelet[664]: E0311 13:40:50.915444     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 11 13:40:50 old-k8s-version-070145 kubelet[664]: E0311 13:40:50.915444     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:40:59.807914  944071 out.go:239]   Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: E0311 13:40:53.915252     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	  Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: E0311 13:40:53.915252     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	I0311 13:40:59.807928  944071 out.go:304] Setting ErrFile to fd 2...
	I0311 13:40:59.807934  944071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:41:09.809197  944071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:41:09.823369  944071 api_server.go:72] duration metric: took 5m58.93090375s to wait for apiserver process to appear ...
	I0311 13:41:09.823396  944071 api_server.go:88] waiting for apiserver healthz status ...
	I0311 13:41:09.823436  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0311 13:41:09.823498  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 13:41:09.877840  944071 cri.go:89] found id: "2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb"
	I0311 13:41:09.877880  944071 cri.go:89] found id: "1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f"
	I0311 13:41:09.877888  944071 cri.go:89] found id: ""
	I0311 13:41:09.877895  944071 logs.go:276] 2 containers: [2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb 1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f]
	I0311 13:41:09.877950  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:09.881988  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:09.885782  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0311 13:41:09.885874  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 13:41:09.941083  944071 cri.go:89] found id: "3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760"
	I0311 13:41:09.941106  944071 cri.go:89] found id: "5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e"
	I0311 13:41:09.941112  944071 cri.go:89] found id: ""
	I0311 13:41:09.941119  944071 logs.go:276] 2 containers: [3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760 5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e]
	I0311 13:41:09.941174  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:09.946146  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:09.949966  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0311 13:41:09.950040  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 13:41:09.995065  944071 cri.go:89] found id: "1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1"
	I0311 13:41:09.995089  944071 cri.go:89] found id: "6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7"
	I0311 13:41:09.995095  944071 cri.go:89] found id: ""
	I0311 13:41:09.995102  944071 logs.go:276] 2 containers: [1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1 6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7]
	I0311 13:41:09.995157  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.001750  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.008139  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0311 13:41:10.008251  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 13:41:10.086211  944071 cri.go:89] found id: "39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1"
	I0311 13:41:10.086243  944071 cri.go:89] found id: "18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347"
	I0311 13:41:10.086249  944071 cri.go:89] found id: ""
	I0311 13:41:10.086257  944071 logs.go:276] 2 containers: [39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1 18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347]
	I0311 13:41:10.086316  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.092285  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.096429  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0311 13:41:10.096511  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 13:41:10.160818  944071 cri.go:89] found id: "1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5"
	I0311 13:41:10.160838  944071 cri.go:89] found id: "8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298"
	I0311 13:41:10.160842  944071 cri.go:89] found id: ""
	I0311 13:41:10.160850  944071 logs.go:276] 2 containers: [1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5 8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298]
	I0311 13:41:10.160905  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.167099  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.172922  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 13:41:10.172999  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 13:41:10.240216  944071 cri.go:89] found id: "bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79"
	I0311 13:41:10.240235  944071 cri.go:89] found id: "2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918"
	I0311 13:41:10.240239  944071 cri.go:89] found id: ""
	I0311 13:41:10.240246  944071 logs.go:276] 2 containers: [bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79 2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918]
	I0311 13:41:10.240304  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.248593  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.252452  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0311 13:41:10.252524  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 13:41:10.303927  944071 cri.go:89] found id: "881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1"
	I0311 13:41:10.303951  944071 cri.go:89] found id: "ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575"
	I0311 13:41:10.303962  944071 cri.go:89] found id: ""
	I0311 13:41:10.303970  944071 logs.go:276] 2 containers: [881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1 ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575]
	I0311 13:41:10.304022  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.310472  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.314370  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 13:41:10.314439  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 13:41:10.384102  944071 cri.go:89] found id: "bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9"
	I0311 13:41:10.384123  944071 cri.go:89] found id: ""
	I0311 13:41:10.384132  944071 logs.go:276] 1 containers: [bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9]
	I0311 13:41:10.384187  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.390472  944071 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0311 13:41:10.390545  944071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 13:41:10.438709  944071 cri.go:89] found id: "91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503"
	I0311 13:41:10.438731  944071 cri.go:89] found id: "96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2"
	I0311 13:41:10.438736  944071 cri.go:89] found id: ""
	I0311 13:41:10.438743  944071 logs.go:276] 2 containers: [91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503 96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2]
	I0311 13:41:10.438815  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.442584  944071 ssh_runner.go:195] Run: which crictl
	I0311 13:41:10.445932  944071 logs.go:123] Gathering logs for kubernetes-dashboard [bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9] ...
	I0311 13:41:10.445962  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9"
	I0311 13:41:10.516714  944071 logs.go:123] Gathering logs for storage-provisioner [91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503] ...
	I0311 13:41:10.516770  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503"
	I0311 13:41:10.587649  944071 logs.go:123] Gathering logs for container status ...
	I0311 13:41:10.587675  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:41:10.686391  944071 logs.go:123] Gathering logs for coredns [1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1] ...
	I0311 13:41:10.686423  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1"
	I0311 13:41:10.740465  944071 logs.go:123] Gathering logs for kube-scheduler [39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1] ...
	I0311 13:41:10.740496  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1"
	I0311 13:41:10.794246  944071 logs.go:123] Gathering logs for kube-controller-manager [2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918] ...
	I0311 13:41:10.794273  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918"
	I0311 13:41:10.868780  944071 logs.go:123] Gathering logs for kindnet [881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1] ...
	I0311 13:41:10.868853  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1"
	I0311 13:41:10.916588  944071 logs.go:123] Gathering logs for kube-scheduler [18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347] ...
	I0311 13:41:10.916724  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347"
	I0311 13:41:10.984043  944071 logs.go:123] Gathering logs for kube-proxy [8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298] ...
	I0311 13:41:10.984216  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298"
	I0311 13:41:11.051457  944071 logs.go:123] Gathering logs for kindnet [ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575] ...
	I0311 13:41:11.051531  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575"
	I0311 13:41:11.108664  944071 logs.go:123] Gathering logs for containerd ...
	I0311 13:41:11.108778  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0311 13:41:11.182036  944071 logs.go:123] Gathering logs for dmesg ...
	I0311 13:41:11.182130  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:41:11.205719  944071 logs.go:123] Gathering logs for kube-apiserver [1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f] ...
	I0311 13:41:11.208834  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f"
	I0311 13:41:11.288040  944071 logs.go:123] Gathering logs for etcd [3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760] ...
	I0311 13:41:11.288078  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760"
	I0311 13:41:11.360152  944071 logs.go:123] Gathering logs for etcd [5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e] ...
	I0311 13:41:11.360180  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e"
	I0311 13:41:11.443060  944071 logs.go:123] Gathering logs for storage-provisioner [96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2] ...
	I0311 13:41:11.443180  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2"
	I0311 13:41:11.536727  944071 logs.go:123] Gathering logs for kube-proxy [1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5] ...
	I0311 13:41:11.536857  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5"
	I0311 13:41:11.771851  944071 logs.go:123] Gathering logs for kube-controller-manager [bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79] ...
	I0311 13:41:11.771882  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79"
	I0311 13:41:11.936183  944071 logs.go:123] Gathering logs for kubelet ...
	I0311 13:41:11.936224  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0311 13:41:12.006020  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.459417     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:41:12.006319  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.459726     664 reflector.go:138] object-"kube-system"/"coredns-token-qkqt2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-qkqt2" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:41:12.006551  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.459902     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:41:12.006796  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.460076     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-gxdwg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gxdwg" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:41:12.007039  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:29 old-k8s-version-070145 kubelet[664]: E0311 13:35:29.460251     664 reflector.go:138] object-"kube-system"/"kindnet-token-rnqx8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rnqx8" is forbidden: User "system:node:old-k8s-version-070145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-070145' and this object
	W0311 13:41:12.016281  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:30 old-k8s-version-070145 kubelet[664]: E0311 13:35:30.669341     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:41:12.017053  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:31 old-k8s-version-070145 kubelet[664]: E0311 13:35:31.153659     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.019852  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:44 old-k8s-version-070145 kubelet[664]: E0311 13:35:44.954825     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:41:12.021550  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:56 old-k8s-version-070145 kubelet[664]: E0311 13:35:56.925831     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.022347  944071 logs.go:138] Found kubelet problem: Mar 11 13:35:59 old-k8s-version-070145 kubelet[664]: E0311 13:35:59.291208     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.022809  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:00 old-k8s-version-070145 kubelet[664]: E0311 13:36:00.325909     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.023990  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:01 old-k8s-version-070145 kubelet[664]: E0311 13:36:01.328676     664 pod_workers.go:191] Error syncing pod 56c93912-57eb-4ab7-8853-172caa7e74d0 ("storage-provisioner_kube-system(56c93912-57eb-4ab7-8853-172caa7e74d0)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(56c93912-57eb-4ab7-8853-172caa7e74d0)"
	W0311 13:41:12.024343  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:03 old-k8s-version-070145 kubelet[664]: E0311 13:36:03.832442     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.029660  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:08 old-k8s-version-070145 kubelet[664]: E0311 13:36:08.925493     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:41:12.031738  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:14 old-k8s-version-070145 kubelet[664]: E0311 13:36:14.371867     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.032072  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:22 old-k8s-version-070145 kubelet[664]: E0311 13:36:22.915611     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.032410  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:23 old-k8s-version-070145 kubelet[664]: E0311 13:36:23.832156     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.037788  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:35 old-k8s-version-070145 kubelet[664]: E0311 13:36:35.418432     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.038018  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:35 old-k8s-version-070145 kubelet[664]: E0311 13:36:35.917817     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.038345  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:43 old-k8s-version-070145 kubelet[664]: E0311 13:36:43.832567     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.043942  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:49 old-k8s-version-070145 kubelet[664]: E0311 13:36:49.924277     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:41:12.044297  944071 logs.go:138] Found kubelet problem: Mar 11 13:36:56 old-k8s-version-070145 kubelet[664]: E0311 13:36:56.917736     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.044499  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:03 old-k8s-version-070145 kubelet[664]: E0311 13:37:03.915495     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.044968  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:08 old-k8s-version-070145 kubelet[664]: E0311 13:37:08.918019     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.045159  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:14 old-k8s-version-070145 kubelet[664]: E0311 13:37:14.917495     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.045841  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:23 old-k8s-version-070145 kubelet[664]: E0311 13:37:23.531510     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.046261  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:24 old-k8s-version-070145 kubelet[664]: E0311 13:37:24.534676     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.046476  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:28 old-k8s-version-070145 kubelet[664]: E0311 13:37:28.918169     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.046851  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:36 old-k8s-version-070145 kubelet[664]: E0311 13:37:36.918270     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.047070  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:42 old-k8s-version-070145 kubelet[664]: E0311 13:37:42.915790     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.047441  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:48 old-k8s-version-070145 kubelet[664]: E0311 13:37:48.916052     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.047653  944071 logs.go:138] Found kubelet problem: Mar 11 13:37:54 old-k8s-version-070145 kubelet[664]: E0311 13:37:54.916972     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.048023  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:03 old-k8s-version-070145 kubelet[664]: E0311 13:38:03.915594     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.054889  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:08 old-k8s-version-070145 kubelet[664]: E0311 13:38:08.915793     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.056930  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:17 old-k8s-version-070145 kubelet[664]: E0311 13:38:17.915139     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.064675  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:23 old-k8s-version-070145 kubelet[664]: E0311 13:38:23.923389     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:41:12.067039  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:29 old-k8s-version-070145 kubelet[664]: E0311 13:38:29.915142     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.072741  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:34 old-k8s-version-070145 kubelet[664]: E0311 13:38:34.915655     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.073153  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:41 old-k8s-version-070145 kubelet[664]: E0311 13:38:41.915198     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.073345  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:45 old-k8s-version-070145 kubelet[664]: E0311 13:38:45.915534     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.073941  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:54 old-k8s-version-070145 kubelet[664]: E0311 13:38:54.718973     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.074125  944071 logs.go:138] Found kubelet problem: Mar 11 13:38:58 old-k8s-version-070145 kubelet[664]: E0311 13:38:58.915749     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.074451  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:03 old-k8s-version-070145 kubelet[664]: E0311 13:39:03.832416     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.074640  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:09 old-k8s-version-070145 kubelet[664]: E0311 13:39:09.916058     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.074974  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:17 old-k8s-version-070145 kubelet[664]: E0311 13:39:17.915142     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.075174  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:24 old-k8s-version-070145 kubelet[664]: E0311 13:39:24.915425     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.075500  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:31 old-k8s-version-070145 kubelet[664]: E0311 13:39:31.915158     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.075688  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:35 old-k8s-version-070145 kubelet[664]: E0311 13:39:35.915581     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.076023  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:42 old-k8s-version-070145 kubelet[664]: E0311 13:39:42.915728     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.076207  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:48 old-k8s-version-070145 kubelet[664]: E0311 13:39:48.916029     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.076540  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:56 old-k8s-version-070145 kubelet[664]: E0311 13:39:56.916329     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.076723  944071 logs.go:138] Found kubelet problem: Mar 11 13:39:59 old-k8s-version-070145 kubelet[664]: E0311 13:39:59.916223     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.083790  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:11 old-k8s-version-070145 kubelet[664]: E0311 13:40:11.915247     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.084875  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:11 old-k8s-version-070145 kubelet[664]: E0311 13:40:11.916186     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.085112  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:22 old-k8s-version-070145 kubelet[664]: E0311 13:40:22.921762     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.085444  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:26 old-k8s-version-070145 kubelet[664]: E0311 13:40:26.915380     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.085631  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:36 old-k8s-version-070145 kubelet[664]: E0311 13:40:36.915484     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.085956  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: E0311 13:40:39.915140     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.089446  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:50 old-k8s-version-070145 kubelet[664]: E0311 13:40:50.915444     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.089806  944071 logs.go:138] Found kubelet problem: Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: E0311 13:40:53.915252     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.092881  944071 logs.go:138] Found kubelet problem: Mar 11 13:41:04 old-k8s-version-070145 kubelet[664]: E0311 13:41:04.935691     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:41:12.093226  944071 logs.go:138] Found kubelet problem: Mar 11 13:41:07 old-k8s-version-070145 kubelet[664]: E0311 13:41:07.915147     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	I0311 13:41:12.093234  944071 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:41:12.093248  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:41:12.600012  944071 logs.go:123] Gathering logs for kube-apiserver [2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb] ...
	I0311 13:41:12.600048  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb"
	I0311 13:41:12.718027  944071 logs.go:123] Gathering logs for coredns [6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7] ...
	I0311 13:41:12.718060  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7"
	I0311 13:41:12.792099  944071 out.go:304] Setting ErrFile to fd 2...
	I0311 13:41:12.792123  944071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0311 13:41:12.792182  944071 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0311 13:41:12.792192  944071 out.go:239]   Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: E0311 13:40:39.915140     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	  Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: E0311 13:40:39.915140     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.792198  944071 out.go:239]   Mar 11 13:40:50 old-k8s-version-070145 kubelet[664]: E0311 13:40:50.915444     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 11 13:40:50 old-k8s-version-070145 kubelet[664]: E0311 13:40:50.915444     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.792302  944071 out.go:239]   Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: E0311 13:40:53.915252     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	  Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: E0311 13:40:53.915252     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.792311  944071 out.go:239]   Mar 11 13:41:04 old-k8s-version-070145 kubelet[664]: E0311 13:41:04.935691     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	  Mar 11 13:41:04 old-k8s-version-070145 kubelet[664]: E0311 13:41:04.935691     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:41:12.792319  944071 out.go:239]   Mar 11 13:41:07 old-k8s-version-070145 kubelet[664]: E0311 13:41:07.915147     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	  Mar 11 13:41:07 old-k8s-version-070145 kubelet[664]: E0311 13:41:07.915147     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	I0311 13:41:12.792326  944071 out.go:304] Setting ErrFile to fd 2...
	I0311 13:41:12.792340  944071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:41:22.793384  944071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0311 13:41:22.808295  944071 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0311 13:41:22.811005  944071 out.go:177] 
	W0311 13:41:22.812825  944071 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0311 13:41:22.812862  944071 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0311 13:41:22.812880  944071 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0311 13:41:22.812886  944071 out.go:239] * 
	* 
	W0311 13:41:22.814440  944071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:41:22.816345  944071 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-070145 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
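Triage note on the failure above: the stderr capture shows the apiserver healthz probe returning 200 just before exit, yet minikube aborts with K8S_UNHEALTHY_CONTROL_PLANE (exit status 102) because the control-plane node never reported the target version v1.20.0 within the 6m wait. The recurring kubelet problems are an ErrImagePull/ImagePullBackOff loop for metrics-server, whose image is pinned to the unresolvable host fake.domain (apparently deliberate in this suite, to exercise pull-failure handling), and a CrashLoopBackOff for dashboard-metrics-scraper. A minimal recovery sketch following the log's own suggestion (hedged: --all --purge removes every local minikube profile and its cached state, not just this one):

	# reset local minikube state as the log output suggests
	out/minikube-linux-arm64 delete --all --purge
	# retry the same start invocation the test used
	out/minikube-linux-arm64 start -p old-k8s-version-070145 --memory=2200 \
	  --alsologtostderr --wait=true --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.20.0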
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-070145
helpers_test.go:235: (dbg) docker inspect old-k8s-version-070145:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2b58f15dac72da98fcdbecdafc5c14b1959ab1722ad895570c1793b350db4d3",
	        "Created": "2024-03-11T13:31:48.457635198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 944284,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-11T13:35:02.769978988Z",
	            "FinishedAt": "2024-03-11T13:35:01.517476887Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/c2b58f15dac72da98fcdbecdafc5c14b1959ab1722ad895570c1793b350db4d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2b58f15dac72da98fcdbecdafc5c14b1959ab1722ad895570c1793b350db4d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2b58f15dac72da98fcdbecdafc5c14b1959ab1722ad895570c1793b350db4d3/hosts",
	        "LogPath": "/var/lib/docker/containers/c2b58f15dac72da98fcdbecdafc5c14b1959ab1722ad895570c1793b350db4d3/c2b58f15dac72da98fcdbecdafc5c14b1959ab1722ad895570c1793b350db4d3-json.log",
	        "Name": "/old-k8s-version-070145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-070145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-070145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44820a06763716b579fa2e82a7d86f92c4b59f0db07d68827bb731dc1cb14469-init/diff:/var/lib/docker/overlay2/361ff7146c1f8f9f5c07c69a78aa76c291e59293e7654dd235648b6a877bb54d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44820a06763716b579fa2e82a7d86f92c4b59f0db07d68827bb731dc1cb14469/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44820a06763716b579fa2e82a7d86f92c4b59f0db07d68827bb731dc1cb14469/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44820a06763716b579fa2e82a7d86f92c4b59f0db07d68827bb731dc1cb14469/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-070145",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-070145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-070145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-070145",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-070145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "169942820b5672ed9c7c7f40efcb95f7b7a68fb2ec1852cbf28d9a291580c306",
	            "SandboxKey": "/var/run/docker/netns/169942820b56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34038"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34037"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34034"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34036"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34035"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-070145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c2b58f15dac7",
	                        "old-k8s-version-070145"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "930f8d5ae0b9630ca65148854dfdf42bccc33d0bc60375b23b7bbfb10f5b87f5",
	                    "EndpointID": "ec927d4701e3362cbb250a35168c104c0cb7dc1dc7ba515ae9d8e9322fd72be5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-070145",
	                        "c2b58f15dac7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
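For post-mortem triage it is usually enough to extract a few fields from docker inspect rather than reading the full JSON above. A minimal sketch using Docker's Go-template output (-f); the field paths match the structure shown, and the network name here happens to equal the profile name:

	# container state, init PID and restart count
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}} restarts={{.RestartCount}}' old-k8s-version-070145
	# container IP on the profile network
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-070145").IPAddress}}' old-k8s-version-070145
	# host port published for the apiserver (8443/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-070145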
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-070145 -n old-k8s-version-070145
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-070145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-070145 logs -n 25: (2.744536438s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-442518                              | cert-expiration-442518   | jenkins | v1.32.0 | 11 Mar 24 13:30 UTC | 11 Mar 24 13:31 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-445766                               | force-systemd-env-445766 | jenkins | v1.32.0 | 11 Mar 24 13:31 UTC | 11 Mar 24 13:31 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-445766                            | force-systemd-env-445766 | jenkins | v1.32.0 | 11 Mar 24 13:31 UTC | 11 Mar 24 13:31 UTC |
	| start   | -p cert-options-508405                                 | cert-options-508405      | jenkins | v1.32.0 | 11 Mar 24 13:31 UTC | 11 Mar 24 13:31 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-508405 ssh                                | cert-options-508405      | jenkins | v1.32.0 | 11 Mar 24 13:31 UTC | 11 Mar 24 13:31 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-508405 -- sudo                         | cert-options-508405      | jenkins | v1.32.0 | 11 Mar 24 13:31 UTC | 11 Mar 24 13:31 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-508405                                 | cert-options-508405      | jenkins | v1.32.0 | 11 Mar 24 13:31 UTC | 11 Mar 24 13:31 UTC |
	| start   | -p old-k8s-version-070145                              | old-k8s-version-070145   | jenkins | v1.32.0 | 11 Mar 24 13:31 UTC | 11 Mar 24 13:34 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-442518                              | cert-expiration-442518   | jenkins | v1.32.0 | 11 Mar 24 13:34 UTC | 11 Mar 24 13:34 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-442518                              | cert-expiration-442518   | jenkins | v1.32.0 | 11 Mar 24 13:34 UTC | 11 Mar 24 13:34 UTC |
	| start   | -p no-preload-740029                                   | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:34 UTC | 11 Mar 24 13:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-070145        | old-k8s-version-070145   | jenkins | v1.32.0 | 11 Mar 24 13:34 UTC | 11 Mar 24 13:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-070145                              | old-k8s-version-070145   | jenkins | v1.32.0 | 11 Mar 24 13:34 UTC | 11 Mar 24 13:35 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-070145             | old-k8s-version-070145   | jenkins | v1.32.0 | 11 Mar 24 13:35 UTC | 11 Mar 24 13:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-070145                              | old-k8s-version-070145   | jenkins | v1.32.0 | 11 Mar 24 13:35 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-740029             | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:35 UTC | 11 Mar 24 13:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-740029                                   | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:35 UTC | 11 Mar 24 13:36 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-740029                  | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:36 UTC | 11 Mar 24 13:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-740029                                   | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:36 UTC | 11 Mar 24 13:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                          |         |         |                     |                     |
	| image   | no-preload-740029 image list                           | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:41 UTC | 11 Mar 24 13:41 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-740029                                   | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:41 UTC | 11 Mar 24 13:41 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-740029                                   | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:41 UTC | 11 Mar 24 13:41 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-740029                                   | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:41 UTC | 11 Mar 24 13:41 UTC |
	| delete  | -p no-preload-740029                                   | no-preload-740029        | jenkins | v1.32.0 | 11 Mar 24 13:41 UTC | 11 Mar 24 13:41 UTC |
	| start   | -p embed-certs-810824                                  | embed-certs-810824       | jenkins | v1.32.0 | 11 Mar 24 13:41 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 13:41:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 13:41:16.166837  954940 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:41:16.167067  954940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:41:16.167077  954940 out.go:304] Setting ErrFile to fd 2...
	I0311 13:41:16.167082  954940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:41:16.167309  954940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 13:41:16.167746  954940 out.go:298] Setting JSON to false
	I0311 13:41:16.168736  954940 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19421,"bootTime":1710145056,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 13:41:16.168861  954940 start.go:139] virtualization:  
	I0311 13:41:16.171462  954940 out.go:177] * [embed-certs-810824] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 13:41:16.173936  954940 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 13:41:16.175702  954940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:41:16.174049  954940 notify.go:220] Checking for updates...
	I0311 13:41:16.179283  954940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 13:41:16.181601  954940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 13:41:16.183539  954940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 13:41:16.185616  954940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:41:16.188720  954940 config.go:182] Loaded profile config "old-k8s-version-070145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0311 13:41:16.188857  954940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:41:16.210093  954940 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 13:41:16.210203  954940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:41:16.276978  954940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 13:41:16.267194749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:41:16.277090  954940 docker.go:295] overlay module found
	I0311 13:41:16.279091  954940 out.go:177] * Using the docker driver based on user configuration
	I0311 13:41:16.280999  954940 start.go:297] selected driver: docker
	I0311 13:41:16.281018  954940 start.go:901] validating driver "docker" against <nil>
	I0311 13:41:16.281052  954940 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:41:16.281685  954940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:41:16.336205  954940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 13:41:16.326031327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:41:16.336369  954940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:41:16.336594  954940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:41:16.338455  954940 out.go:177] * Using Docker driver with root privileges
	I0311 13:41:16.340259  954940 cni.go:84] Creating CNI manager for ""
	I0311 13:41:16.340280  954940 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 13:41:16.340291  954940 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 13:41:16.340402  954940 start.go:340] cluster config:
	{Name:embed-certs-810824 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-810824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:41:16.342525  954940 out.go:177] * Starting "embed-certs-810824" primary control-plane node in "embed-certs-810824" cluster
	I0311 13:41:16.344148  954940 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 13:41:16.346155  954940 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 13:41:16.347874  954940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 13:41:16.347924  954940 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0311 13:41:16.347947  954940 cache.go:56] Caching tarball of preloaded images
	I0311 13:41:16.348037  954940 preload.go:173] Found /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:41:16.348052  954940 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0311 13:41:16.348160  954940 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/embed-certs-810824/config.json ...
	I0311 13:41:16.348190  954940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/embed-certs-810824/config.json: {Name:mk327b6c081f5c215f8a51c34b97fbf9957e350c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
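	(The two lines above show minikube persisting the generated cluster config to a per-profile config.json under MINIKUBE_HOME, guarded by a write lock. A minimal Go sketch of reading a few of those fields back; the struct below is a hand-picked subset of the cluster config printed earlier in this log, not minikube's full ClusterConfig type.)
	
	// Sketch: read selected fields out of a saved minikube profile config.json.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	type kubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
	
	// clusterConfig is a subset of the fields seen in the log's cluster config dump.
	type clusterConfig struct {
		Name             string
		Driver           string
		Memory           int
		KubernetesConfig kubernetesConfig
	}
	
	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: readconfig <path-to-config.json>")
			os.Exit(1)
		}
		// Path layout as seen in the log: $MINIKUBE_HOME/profiles/<name>/config.json
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var cc clusterConfig
		if err := json.Unmarshal(data, &cc); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s: driver=%s memory=%dMB k8s=%s\n",
			cc.Name, cc.Driver, cc.Memory, cc.KubernetesConfig.KubernetesVersion)
	}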
	I0311 13:41:16.348288  954940 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 13:41:16.365523  954940 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0311 13:41:16.365551  954940 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0311 13:41:16.365566  954940 cache.go:194] Successfully downloaded all kic artifacts
	I0311 13:41:16.365594  954940 start.go:360] acquireMachinesLock for embed-certs-810824: {Name:mk5e2e2ff648c4dd68f8367eedc78d81e84033d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:41:16.366195  954940 start.go:364] duration metric: took 575.946µs to acquireMachinesLock for "embed-certs-810824"
	I0311 13:41:16.366230  954940 start.go:93] Provisioning new machine with config: &{Name:embed-certs-810824 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-810824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0311 13:41:16.366333  954940 start.go:125] createHost starting for "" (driver="docker")
	I0311 13:41:12.600012  944071 logs.go:123] Gathering logs for kube-apiserver [2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb] ...
	I0311 13:41:12.600048  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb"
	I0311 13:41:12.718027  944071 logs.go:123] Gathering logs for coredns [6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7] ...
	I0311 13:41:12.718060  944071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7"
	I0311 13:41:12.792099  944071 out.go:304] Setting ErrFile to fd 2...
	I0311 13:41:12.792123  944071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0311 13:41:12.792182  944071 out.go:239] X Problems detected in kubelet:
	W0311 13:41:12.792192  944071 out.go:239]   Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: E0311 13:40:39.915140     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.792198  944071 out.go:239]   Mar 11 13:40:50 old-k8s-version-070145 kubelet[664]: E0311 13:40:50.915444     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0311 13:41:12.792302  944071 out.go:239]   Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: E0311 13:40:53.915252     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	W0311 13:41:12.792311  944071 out.go:239]   Mar 11 13:41:04 old-k8s-version-070145 kubelet[664]: E0311 13:41:04.935691     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0311 13:41:12.792319  944071 out.go:239]   Mar 11 13:41:07 old-k8s-version-070145 kubelet[664]: E0311 13:41:07.915147     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	I0311 13:41:12.792326  944071 out.go:304] Setting ErrFile to fd 2...
	I0311 13:41:12.792340  944071 out.go:338] TERM=,COLORTERM=, which probably does not support color
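	(The pid-944071 lines interleaved above come from the concurrent log-gathering pass on old-k8s-version-070145, which collects per-container logs by running `crictl logs --tail 400 <container-id>` on the node. A minimal local sketch of that step, assuming crictl is installed and running it directly via os/exec rather than through minikube's ssh_runner:)
	
	// Sketch: fetch the last N lines of a container's logs with crictl,
	// mirroring the `sudo /usr/bin/crictl logs --tail 400 <id>` calls above.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func containerLogs(id string, tail int) (string, error) {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs",
			"--tail", fmt.Sprint(tail), id).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: ctlogs <container-id>")
			os.Exit(1)
		}
		logs, err := containerLogs(os.Args[1], 400)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(logs)
	}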
	I0311 13:41:16.368944  954940 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0311 13:41:16.369196  954940 start.go:159] libmachine.API.Create for "embed-certs-810824" (driver="docker")
	I0311 13:41:16.369245  954940 client.go:168] LocalClient.Create starting
	I0311 13:41:16.369333  954940 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18350-741028/.minikube/certs/ca.pem
	I0311 13:41:16.369373  954940 main.go:141] libmachine: Decoding PEM data...
	I0311 13:41:16.369389  954940 main.go:141] libmachine: Parsing certificate...
	I0311 13:41:16.369452  954940 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18350-741028/.minikube/certs/cert.pem
	I0311 13:41:16.369476  954940 main.go:141] libmachine: Decoding PEM data...
	I0311 13:41:16.369489  954940 main.go:141] libmachine: Parsing certificate...
	I0311 13:41:16.369866  954940 cli_runner.go:164] Run: docker network inspect embed-certs-810824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0311 13:41:16.384897  954940 cli_runner.go:211] docker network inspect embed-certs-810824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0311 13:41:16.384994  954940 network_create.go:281] running [docker network inspect embed-certs-810824] to gather additional debugging logs...
	I0311 13:41:16.385021  954940 cli_runner.go:164] Run: docker network inspect embed-certs-810824
	W0311 13:41:16.400001  954940 cli_runner.go:211] docker network inspect embed-certs-810824 returned with exit code 1
	I0311 13:41:16.400057  954940 network_create.go:284] error running [docker network inspect embed-certs-810824]: docker network inspect embed-certs-810824: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-810824 not found
	I0311 13:41:16.400071  954940 network_create.go:286] output of [docker network inspect embed-certs-810824]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-810824 not found
	
	** /stderr **
	I0311 13:41:16.400167  954940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 13:41:16.417362  954940 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a016e0cbfe3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:23:1a:bb:a6} reservation:<nil>}
	I0311 13:41:16.417804  954940 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-86d71c327011 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:a7:43:00:21} reservation:<nil>}
	I0311 13:41:16.418231  954940 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-29847169e5e9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:19:81:84:86} reservation:<nil>}
	I0311 13:41:16.418581  954940 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-930f8d5ae0b9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:f9:97:db:d1} reservation:<nil>}
	I0311 13:41:16.419218  954940 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025911d0}
	I0311 13:41:16.419259  954940 network_create.go:124] attempt to create docker network embed-certs-810824 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0311 13:41:16.419355  954940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-810824 embed-certs-810824
	I0311 13:41:16.482882  954940 network_create.go:108] docker network embed-certs-810824 192.168.85.0/24 created
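	(The subnet scan above walks candidate private /24s, skipping those already claimed by existing docker bridges (192.168.49.0, .58.0, .67.0, .76.0), and creates the network on the first free one, 192.168.85.0/24. A minimal Go sketch of that scan; the step of 9 in the third octet is inferred from the skipped subnets in this log, not taken from minikube's source:)
	
	// Sketch: pick the first 192.168.x.0/24 not already used by a bridge,
	// stepping the third octet by 9 as the log's candidate sequence suggests.
	package main
	
	import "fmt"
	
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 250; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return "" // no free candidate in range
	}
	
	func main() {
		// Subnets the log reports as taken by existing docker bridges.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24
	}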
	I0311 13:41:16.482918  954940 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-810824" container
	I0311 13:41:16.483021  954940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0311 13:41:16.503458  954940 cli_runner.go:164] Run: docker volume create embed-certs-810824 --label name.minikube.sigs.k8s.io=embed-certs-810824 --label created_by.minikube.sigs.k8s.io=true
	I0311 13:41:16.523793  954940 oci.go:103] Successfully created a docker volume embed-certs-810824
	I0311 13:41:16.523877  954940 cli_runner.go:164] Run: docker run --rm --name embed-certs-810824-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-810824 --entrypoint /usr/bin/test -v embed-certs-810824:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0311 13:41:17.179764  954940 oci.go:107] Successfully prepared a docker volume embed-certs-810824
	I0311 13:41:17.179831  954940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 13:41:17.179855  954940 kic.go:194] Starting extracting preloaded images to volume ...
	I0311 13:41:17.179945  954940 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-810824:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0311 13:41:22.793384  944071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0311 13:41:22.808295  944071 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
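	(The healthz probe above is a plain HTTPS GET against the apiserver endpoint, considered healthy once it returns 200 with body "ok". A minimal Go sketch of the same check; TLS verification is skipped here for brevity, since the apiserver certificate is signed by minikube's own CA, whereas a stricter version would trust the cluster CA from the kubeconfig:)
	
	// Sketch: probe the apiserver health endpoint seen in the log above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only: skip verification of the
				// cluster-CA-signed cert instead of loading the CA bundle.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}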
	I0311 13:41:22.811005  944071 out.go:177] 
	W0311 13:41:22.812825  944071 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0311 13:41:22.812862  944071 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0311 13:41:22.812880  944071 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0311 13:41:22.812886  944071 out.go:239] * 
	W0311 13:41:22.814440  944071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:41:22.816345  944071 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6159094736fcf       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   418e6754ee6be       dashboard-metrics-scraper-8d5bb5db8-trflx
	91683919c3a68       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   cef72cd2d1288       storage-provisioner
	bc98146e508c7       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   1aa8f11770948       kubernetes-dashboard-cd95d586-zhrbn
	1b76f78246021       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   27e58de389838       coredns-74ff55c5b-4c948
	881db81636d17       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   68bcc03722002       kindnet-2ptss
	1046f75cb6d0b       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   30e2ed4a67120       kube-proxy-6vcch
	bc6ec28f9d79f       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   61b6101c93a3f       busybox
	96c1af2a12c5a       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   cef72cd2d1288       storage-provisioner
	3d91f57195c2a       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   3c3270ff5d797       etcd-old-k8s-version-070145
	39152e7d8a961       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   d14d0e4d93067       kube-scheduler-old-k8s-version-070145
	2db70787e2d49       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   bc8f6d2a0c32f       kube-apiserver-old-k8s-version-070145
	bc4f384ce5455       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   856ddb14e3fb8       kube-controller-manager-old-k8s-version-070145
	72f46a3f79d14       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   6f70a280df3bf       busybox
	6f6759a5fa53b       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   b33bb4bc7d5e7       coredns-74ff55c5b-4c948
	8334568a27c35       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   4d40a7380b157       kube-proxy-6vcch
	ee7a5ecb39413       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   ec13f86f7071c       kindnet-2ptss
	2a8386dbbc357       1df8a2b116bd1       9 minutes ago       Exited              kube-controller-manager     0                   b48d10c3cb039       kube-controller-manager-old-k8s-version-070145
	18ba14631e222       e7605f88f17d6       9 minutes ago       Exited              kube-scheduler              0                   21ad0d8daae22       kube-scheduler-old-k8s-version-070145
	5ce8ca658a489       05b738aa1bc63       9 minutes ago       Exited              etcd                        0                   f1073c3d966d5       etcd-old-k8s-version-070145
	1efd3d3c6b431       2c08bbbc02d3a       9 minutes ago       Exited              kube-apiserver              0                   f2da1085267d3       kube-apiserver-old-k8s-version-070145
	
	
	==> containerd <==
	Mar 11 13:37:22 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:22.941929473Z" level=info msg="CreateContainer within sandbox \"418e6754ee6beffa95e01505a0a1d90b5225ce2ba9bff9d700f08a15f2da3f7d\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"c775ba6e2d78ed50442f20a29d4c6c2aec402b4eda7b85fb4f56831df4a52bfc\""
	Mar 11 13:37:22 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:22.942983421Z" level=info msg="StartContainer for \"c775ba6e2d78ed50442f20a29d4c6c2aec402b4eda7b85fb4f56831df4a52bfc\""
	Mar 11 13:37:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:23.010849570Z" level=info msg="StartContainer for \"c775ba6e2d78ed50442f20a29d4c6c2aec402b4eda7b85fb4f56831df4a52bfc\" returns successfully"
	Mar 11 13:37:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:23.071068116Z" level=info msg="shim disconnected" id=c775ba6e2d78ed50442f20a29d4c6c2aec402b4eda7b85fb4f56831df4a52bfc
	Mar 11 13:37:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:23.071368667Z" level=warning msg="cleaning up after shim disconnected" id=c775ba6e2d78ed50442f20a29d4c6c2aec402b4eda7b85fb4f56831df4a52bfc namespace=k8s.io
	Mar 11 13:37:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:23.071493884Z" level=info msg="cleaning up dead shim"
	Mar 11 13:37:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:23.081135095Z" level=warning msg="cleanup warnings time=\"2024-03-11T13:37:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2953 runtime=io.containerd.runc.v2\n"
	Mar 11 13:37:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:23.534618059Z" level=info msg="RemoveContainer for \"a3d779f141731561ccfb72b7b43ef91de46a3a518a70907b68ae2c8963d66834\""
	Mar 11 13:37:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:37:23.546337308Z" level=info msg="RemoveContainer for \"a3d779f141731561ccfb72b7b43ef91de46a3a518a70907b68ae2c8963d66834\" returns successfully"
	Mar 11 13:38:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:23.916032692Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:38:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:23.921325906Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 11 13:38:23 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:23.922873561Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 11 13:38:53 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:53.917166188Z" level=info msg="CreateContainer within sandbox \"418e6754ee6beffa95e01505a0a1d90b5225ce2ba9bff9d700f08a15f2da3f7d\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 11 13:38:53 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:53.932993130Z" level=info msg="CreateContainer within sandbox \"418e6754ee6beffa95e01505a0a1d90b5225ce2ba9bff9d700f08a15f2da3f7d\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7\""
	Mar 11 13:38:53 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:53.933678589Z" level=info msg="StartContainer for \"6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7\""
	Mar 11 13:38:54 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:54.003504990Z" level=info msg="StartContainer for \"6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7\" returns successfully"
	Mar 11 13:38:54 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:54.035640512Z" level=info msg="shim disconnected" id=6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7
	Mar 11 13:38:54 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:54.035705398Z" level=warning msg="cleaning up after shim disconnected" id=6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7 namespace=k8s.io
	Mar 11 13:38:54 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:54.035717032Z" level=info msg="cleaning up dead shim"
	Mar 11 13:38:54 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:54.052673779Z" level=warning msg="cleanup warnings time=\"2024-03-11T13:38:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3208 runtime=io.containerd.runc.v2\ntime=\"2024-03-11T13:38:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
	Mar 11 13:38:54 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:54.719955443Z" level=info msg="RemoveContainer for \"c775ba6e2d78ed50442f20a29d4c6c2aec402b4eda7b85fb4f56831df4a52bfc\""
	Mar 11 13:38:54 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:38:54.727474550Z" level=info msg="RemoveContainer for \"c775ba6e2d78ed50442f20a29d4c6c2aec402b4eda7b85fb4f56831df4a52bfc\" returns successfully"
	Mar 11 13:41:04 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:41:04.916282219Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:41:04 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:41:04.933034141Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 11 13:41:04 old-k8s-version-070145 containerd[569]: time="2024-03-11T13:41:04.934688954Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
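	(The pull failures above show the mechanics: containerd turns the image reference into a registry HEAD request, https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4, which fails at DNS resolution because fake.domain is deliberately unresolvable; that is what the test configures via --registries=MetricsServer=fake.domain in the command table earlier. A minimal Go sketch reproducing just the failing request:)
	
	// Sketch: issue the same manifest HEAD request containerd attempted,
	// expecting a DNS "no such host" failure for fake.domain.
	package main
	
	import (
		"fmt"
		"net/http"
	)
	
	func main() {
		url := "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4"
		resp, err := http.Head(url)
		if err != nil {
			// Typically: dial tcp: lookup fake.domain: no such host
			fmt.Println("pull would fail:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("unexpected success:", resp.Status)
	}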
	
	
	==> coredns [1b76f78246021579cf21e4531ff3eab7085e6d1e7d053d08b44e0d6140571ec1] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:60428 - 49663 "HINFO IN 8284826612343937346.2595708993875510566. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013097117s
	
	
	==> coredns [6f6759a5fa53b61d59a4eb3efed14f78f772e070b07ebae287535c7bd97f6ac7] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:43949 - 28500 "HINFO IN 1335326925369388077.7612004195877892919. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022431119s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-070145
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-070145
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=old-k8s-version-070145
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T13_32_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 13:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-070145
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 13:41:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 13:41:22 +0000   Mon, 11 Mar 2024 13:32:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 13:41:22 +0000   Mon, 11 Mar 2024 13:32:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 13:41:22 +0000   Mon, 11 Mar 2024 13:32:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 13:41:22 +0000   Mon, 11 Mar 2024 13:32:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-070145
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 b40dd697a6a04c12a5c813bffb5e562b
	  System UUID:                6a772993-19db-4a68-88ee-31a8447783c9
	  Boot ID:                    26506771-5b0e-4b52-8e79-b1a5a7798867
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 coredns-74ff55c5b-4c948                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m44s
	  kube-system                 etcd-old-k8s-version-070145                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m51s
	  kube-system                 kindnet-2ptss                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m44s
	  kube-system                 kube-apiserver-old-k8s-version-070145             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-controller-manager-old-k8s-version-070145    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-proxy-6vcch                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-scheduler-old-k8s-version-070145             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 metrics-server-9975d5f86-fjvd8                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m36s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-trflx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-zhrbn               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 9m10s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m10s (x2 over 9m10s)  kubelet     Node old-k8s-version-070145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m10s (x2 over 9m10s)  kubelet     Node old-k8s-version-070145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m10s (x2 over 9m10s)  kubelet     Node old-k8s-version-070145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m10s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m51s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m51s                  kubelet     Node old-k8s-version-070145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m51s                  kubelet     Node old-k8s-version-070145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m51s                  kubelet     Node old-k8s-version-070145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m51s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m44s                  kubelet     Node old-k8s-version-070145 status is now: NodeReady
	  Normal  Starting                 8m42s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m7s                   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m7s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m7s)    kubelet     Node old-k8s-version-070145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m7s)    kubelet     Node old-k8s-version-070145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m7s)    kubelet     Node old-k8s-version-070145 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m54s                  kube-proxy  Starting kube-proxy.
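
	The event history above shows three kubelet starts and a clean NodeReady, consistent with the stop/start cycle this profile exercises. To re-collect the same node detail and a time-ordered event list (assuming, as elsewhere in this report, a kubectl context named after the profile):
	    kubectl --context old-k8s-version-070145 describe node old-k8s-version-070145
	    kubectl --context old-k8s-version-070145 get events -A --sort-by=.lastTimestamp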
	
	
	==> dmesg <==
	[  +0.001227] FS-Cache: O-key=[8] '5c3e5c0100000000'
	[  +0.000741] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=00000000769f7abe
	[  +0.001119] FS-Cache: N-key=[8] '5c3e5c0100000000'
	[  +0.002760] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001147] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=000000006cb1d8c4
	[  +0.001125] FS-Cache: O-key=[8] '5c3e5c0100000000'
	[  +0.000746] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=00000000849aab82
	[  +0.001155] FS-Cache: N-key=[8] '5c3e5c0100000000'
	[  +2.618490] FS-Cache: Duplicate cookie detected
	[  +0.000786] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001189] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=0000000051073a19
	[  +0.001122] FS-Cache: O-key=[8] '5b3e5c0100000000'
	[  +0.000741] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001577] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=00000000769f7abe
	[  +0.001521] FS-Cache: N-key=[8] '5b3e5c0100000000'
	[  +0.435282] FS-Cache: Duplicate cookie detected
	[  +0.000826] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000994] FS-Cache: O-cookie d=00000000174c94b3{9p.inode} n=000000002de20fa9
	[  +0.001111] FS-Cache: O-key=[8] '613e5c0100000000'
	[  +0.000833] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000996] FS-Cache: N-cookie d=00000000174c94b3{9p.inode} n=00000000bc2d205d
	[  +0.001142] FS-Cache: N-key=[8] '613e5c0100000000'
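
	The FS-Cache "Duplicate cookie detected" entries are kernel-side 9p caching noise from the shared host mounts and are unrelated to the test outcome. The same ring buffer can be re-read from inside the node with the binary and profile used in this run:
	    out/minikube-linux-arm64 -p old-k8s-version-070145 ssh -- sudo dmesg | tail -n 50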
	
	
	==> etcd [3d91f57195c2a3301694e12ef5429b5c4533a1555f186d0710618b7b852d2760] <==
	2024-03-11 13:37:16.983742 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:37:26.983655 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:37:36.983968 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:37:46.983728 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:37:56.983877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:38:06.983881 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:38:16.984058 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:38:26.983793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:38:36.983690 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:38:46.983976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:38:56.983844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:39:06.983901 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:39:16.983745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:39:26.983683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:39:36.983739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:39:46.983891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:39:56.983884 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:40:06.983839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:40:16.983996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:40:26.983868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:40:36.983882 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:40:46.983758 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:40:56.984346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:41:06.983725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:41:16.983802 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [5ce8ca658a489eed2dbd049d4848b82cf2f879b98316ff0a5e0f9b0ff9f2962e] <==
	raft2024/03/11 13:32:17 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/03/11 13:32:17 INFO: ea7e25599daad906 became leader at term 2
	raft2024/03/11 13:32:17 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-03-11 13:32:17.247787 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-11 13:32:17.249201 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-11 13:32:17.249398 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-11 13:32:17.249491 I | etcdserver: published {Name:old-k8s-version-070145 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-03-11 13:32:17.249732 I | embed: ready to serve client requests
	2024-03-11 13:32:17.251329 I | embed: serving client requests on 192.168.76.2:2379
	2024-03-11 13:32:17.260345 I | embed: ready to serve client requests
	2024-03-11 13:32:17.287385 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-11 13:32:43.231642 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:32:44.184218 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:32:54.184519 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:33:04.184494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:33:14.184490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:33:24.184382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:33:34.184497 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:33:44.184320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:33:54.184477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:34:04.184558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:34:14.184378 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:34:24.184529 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:34:34.186790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-11 13:34:44.184388 I | etcdserver/api/etcdhttp: /health OK (status code 200)
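
	Both etcd containers (pre- and post-restart) report /health OK for their entire lifetimes, so etcd can be ruled out as a factor. A direct probe from inside the etcd pod would look like the following sketch; the certificate paths are an assumption based on minikube's default /var/lib/minikube/certs layout:
	    kubectl --context old-k8s-version-070145 -n kube-system exec etcd-old-k8s-version-070145 -- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health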
	
	
	==> kernel <==
	 13:41:25 up  5:23,  0 users,  load average: 2.00, 1.91, 2.46
	Linux old-k8s-version-070145 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [881db81636d17f287bdf2f1d72f8fbfef4724f8214d5f3b4d8d55fa6f7cce1c1] <==
	I0311 13:39:22.893476       1 main.go:227] handling current node
	I0311 13:39:32.902793       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:39:32.902821       1 main.go:227] handling current node
	I0311 13:39:42.926400       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:39:42.926428       1 main.go:227] handling current node
	I0311 13:39:52.939205       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:39:52.939231       1 main.go:227] handling current node
	I0311 13:40:02.954392       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:40:02.954419       1 main.go:227] handling current node
	I0311 13:40:12.970713       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:40:12.970748       1 main.go:227] handling current node
	I0311 13:40:22.983897       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:40:22.983924       1 main.go:227] handling current node
	I0311 13:40:32.991352       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:40:32.991381       1 main.go:227] handling current node
	I0311 13:40:43.006855       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:40:43.006886       1 main.go:227] handling current node
	I0311 13:40:53.011446       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:40:53.011478       1 main.go:227] handling current node
	I0311 13:41:03.027174       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:41:03.027206       1 main.go:227] handling current node
	I0311 13:41:13.049688       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:41:13.049722       1 main.go:227] handling current node
	I0311 13:41:23.063613       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:41:23.063647       1 main.go:227] handling current node
	
	
	==> kindnet [ee7a5ecb39413c27e23ab32527a89da9064b17f58353410a2e96d56978c5b575] <==
	podIP = 192.168.76.2
	I0311 13:32:42.825398       1 main.go:116] setting mtu 1500 for CNI 
	I0311 13:32:42.825412       1 main.go:146] kindnetd IP family: "ipv4"
	I0311 13:32:42.825423       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0311 13:33:13.055404       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0311 13:33:13.073993       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:33:13.074029       1 main.go:227] handling current node
	I0311 13:33:23.080995       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:33:23.081026       1 main.go:227] handling current node
	I0311 13:33:33.102602       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:33:33.102634       1 main.go:227] handling current node
	I0311 13:33:43.115591       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:33:43.115620       1 main.go:227] handling current node
	I0311 13:33:53.138067       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:33:53.138734       1 main.go:227] handling current node
	I0311 13:34:03.162051       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:34:03.162081       1 main.go:227] handling current node
	I0311 13:34:13.174839       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:34:13.174868       1 main.go:227] handling current node
	I0311 13:34:23.200967       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:34:23.201143       1 main.go:227] handling current node
	I0311 13:34:33.226722       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:34:33.230075       1 main.go:227] handling current node
	I0311 13:34:43.255513       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0311 13:34:43.256178       1 main.go:227] handling current node
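
	The only kindnet error is a single "Failed to get nodes" i/o timeout at 13:33:13, while the restarted apiserver was still coming up; it recovered on the immediate retry and handled the node every ~10s thereafter. A quick follow-up check (label assumed from the kindnet DaemonSet manifest embedded in the controller-manager log below):
	    kubectl --context old-k8s-version-070145 -n kube-system logs -l app=kindnet --tail=20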
	
	
	==> kube-apiserver [1efd3d3c6b431a8f6d76e245d3ec7ab63c30f2b096c264acd026c38d4878a67f] <==
	I0311 13:32:23.945539       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0311 13:32:23.945754       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0311 13:32:24.499852       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 13:32:24.545447       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0311 13:32:24.645226       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0311 13:32:24.646583       1 controller.go:606] quota admission added evaluator for: endpoints
	I0311 13:32:24.651479       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 13:32:25.564247       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0311 13:32:26.091911       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0311 13:32:26.140099       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0311 13:32:34.658358       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 13:32:41.591248       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0311 13:32:41.790591       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0311 13:32:58.614128       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:32:58.614357       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:32:58.614373       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0311 13:33:37.722849       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:33:37.722891       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:33:37.722900       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0311 13:34:11.883006       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:34:11.883077       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:34:11.883087       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0311 13:34:45.236117       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:34:45.236173       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:34:45.236183       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [2db70787e2d49fdecd9bc02d04c1eb56827c3d542198aabfd91e6440e7bdedfb] <==
	I0311 13:37:36.554226       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:37:36.554263       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0311 13:38:21.089424       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:38:21.089880       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:38:21.089914       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0311 13:38:32.391011       1 handler_proxy.go:102] no RequestInfo found in the context
	E0311 13:38:32.391252       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 13:38:32.391268       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 13:39:05.003258       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:39:05.003339       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:39:05.003350       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0311 13:39:48.998420       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:39:48.998548       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:39:48.998560       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0311 13:40:27.948415       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:40:27.948461       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:40:27.948470       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0311 13:40:30.577419       1 handler_proxy.go:102] no RequestInfo found in the context
	E0311 13:40:30.577615       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 13:40:30.577633       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 13:41:12.177408       1 client.go:360] parsed scheme: "passthrough"
	I0311 13:41:12.177455       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0311 13:41:12.177463       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
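
	The recurring 503 while aggregating v1beta1.metrics.k8s.io is the apiserver-side symptom of the metrics-server pod that never starts (see the kubelet log below). The aggregated API's availability condition can be read directly:
	    kubectl --context old-k8s-version-070145 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'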
	
	
	==> kube-controller-manager [2a8386dbbc357521701b6779041b13f7ac23bf6efa6e4648879ae34ab9456918] <==
	I0311 13:32:41.648133       1 event.go:291] "Event occurred" object="old-k8s-version-070145" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-070145 event: Registered Node old-k8s-version-070145 in Controller"
	I0311 13:32:41.656855       1 shared_informer.go:247] Caches are synced for TTL 
	I0311 13:32:41.675110       1 shared_informer.go:247] Caches are synced for node 
	I0311 13:32:41.675406       1 range_allocator.go:172] Starting range CIDR allocator
	I0311 13:32:41.676289       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0311 13:32:41.676400       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0311 13:32:41.685684       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0311 13:32:41.694171       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0311 13:32:41.725796       1 shared_informer.go:247] Caches are synced for attach detach 
	I0311 13:32:41.727629       1 shared_informer.go:247] Caches are synced for GC 
	I0311 13:32:41.734237       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4c948"
	I0311 13:32:41.779938       1 range_allocator.go:373] Set node old-k8s-version-070145 PodCIDR to [10.244.0.0/24]
	I0311 13:32:41.848401       1 shared_informer.go:247] Caches are synced for resource quota 
	I0311 13:32:41.869328       1 shared_informer.go:247] Caches are synced for resource quota 
	I0311 13:32:41.932010       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0311 13:32:41.978157       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6vcch"
	I0311 13:32:42.004134       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2ptss"
	E0311 13:32:42.061061       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c3e1eb0c-f6b8-4108-9bdc-a3583c9763dc", ResourceVersion:"280", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63845760746, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40017a6f40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40017a6f60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40017a6f80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017a6fa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017a6fc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017a6fe0), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017a7000)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017a7040)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400161e9c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400104cd58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001c15e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000ff68)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400104cda0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0311 13:32:42.132240       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0311 13:32:42.224352       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0311 13:32:42.224380       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0311 13:32:43.131317       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0311 13:32:43.169411       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-7t849"
	I0311 13:32:46.644257       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0311 13:34:48.725027       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [bc4f384ce5455f87e2277b85ac8603215e4769c7db6b809c8d6638746a8f0f79] <==
	E0311 13:37:19.570676       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0311 13:37:27.127948       1 request.go:655] Throttling request took 1.048374088s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0311 13:37:27.979487       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0311 13:37:50.072607       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0311 13:37:59.630172       1 request.go:655] Throttling request took 1.048113255s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0311 13:38:00.563639       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0311 13:38:20.574448       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0311 13:38:32.214042       1 request.go:655] Throttling request took 1.048195008s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0311 13:38:33.065707       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0311 13:38:51.076373       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0311 13:39:04.716264       1 request.go:655] Throttling request took 1.048300433s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0311 13:39:05.567766       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0311 13:39:21.578342       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0311 13:39:37.218310       1 request.go:655] Throttling request took 1.047354569s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0311 13:39:38.070094       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0311 13:39:52.080452       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0311 13:40:09.720500       1 request.go:655] Throttling request took 1.048352601s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0311 13:40:10.571917       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0311 13:40:22.590446       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0311 13:40:42.222393       1 request.go:655] Throttling request took 1.048457112s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0311 13:40:43.073672       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0311 13:40:53.092380       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0311 13:41:14.724131       1 request.go:655] Throttling request took 1.048426497s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0311 13:41:15.575605       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0311 13:41:23.594450       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
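
	The resource-quota and garbage-collector discovery failures here are downstream of the same unavailable metrics.k8s.io group, and the ~1s "Throttling request" lines are the controller-manager's default client rate limit rather than a cluster fault. Whether the group is serving can be checked with a raw discovery call:
	    kubectl --context old-k8s-version-070145 get --raw /apis/metrics.k8s.io/v1beta1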
	
	
	==> kube-proxy [1046f75cb6d0bdb95e2b7da1e6dbd5108dd2cc257606b15059e8eec97867afb5] <==
	I0311 13:35:31.829261       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0311 13:35:31.829331       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0311 13:35:31.855423       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0311 13:35:31.855698       1 server_others.go:185] Using iptables Proxier.
	I0311 13:35:31.859649       1 server.go:650] Version: v1.20.0
	I0311 13:35:31.863477       1 config.go:315] Starting service config controller
	I0311 13:35:31.863492       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0311 13:35:31.863542       1 config.go:224] Starting endpoint slice config controller
	I0311 13:35:31.863546       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0311 13:35:31.963673       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0311 13:35:31.963673       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [8334568a27c354d62138974eb15c381240354c69ff9cc3334aafb99945f69298] <==
	I0311 13:32:43.372315       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0311 13:32:43.372625       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0311 13:32:43.402743       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0311 13:32:43.403073       1 server_others.go:185] Using iptables Proxier.
	I0311 13:32:43.403641       1 server.go:650] Version: v1.20.0
	I0311 13:32:43.404403       1 config.go:315] Starting service config controller
	I0311 13:32:43.404468       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0311 13:32:43.404550       1 config.go:224] Starting endpoint slice config controller
	I0311 13:32:43.404600       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0311 13:32:43.504673       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0311 13:32:43.504770       1 shared_informer.go:247] Caches are synced for service config 
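
	Both kube-proxy instances fell back to iptables mode ("Unknown proxy mode \"\", assuming iptables proxy") and synced their caches. The resulting service chains can be sanity-checked on the node:
	    out/minikube-linux-arm64 -p old-k8s-version-070145 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head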
	
	
	==> kube-scheduler [18ba14631e2229013a95152ecc7bd700ff0dc021cd44274d5c5657ffb75ed347] <==
	I0311 13:32:23.134682       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 13:32:23.134695       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 13:32:23.134710       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0311 13:32:23.145146       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 13:32:23.169508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 13:32:23.169600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 13:32:23.169710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 13:32:23.169779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 13:32:23.169833       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 13:32:23.169958       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 13:32:23.170071       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 13:32:23.170189       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 13:32:23.170277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 13:32:23.170359       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 13:32:23.170446       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 13:32:24.030237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 13:32:24.091929       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 13:32:24.092538       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 13:32:24.104979       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 13:32:24.112028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 13:32:24.127421       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 13:32:24.228128       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 13:32:24.269986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 13:32:24.283661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0311 13:32:26.434837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [39152e7d8a96155b7cd9fa4c2075f12a084ad8469e78c5db3718c29114ccdec1] <==
	I0311 13:35:22.780150       1 serving.go:331] Generated self-signed cert in-memory
	W0311 13:35:29.339874       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 13:35:29.340125       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 13:35:29.340228       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 13:35:29.340305       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 13:35:29.470863       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0311 13:35:29.471185       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 13:35:29.471309       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 13:35:29.471330       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0311 13:35:29.571353       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
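
	The first scheduler's "forbidden" list errors and the second scheduler's extension-apiserver-authentication warnings are the usual startup race against RBAC bootstrap, and both cleared once caches synced. If the warning persisted, the rolebinding the log template suggests would be created for the scheduler's user rather than a service account (the binding name below is illustrative):
	    kubectl --context old-k8s-version-070145 -n kube-system create rolebinding scheduler-extension-apiserver-authn --role=extension-apiserver-authentication-reader --user=system:kube-scheduler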
	
	
	==> kubelet <==
	Mar 11 13:39:48 old-k8s-version-070145 kubelet[664]: E0311 13:39:48.916029     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:39:56 old-k8s-version-070145 kubelet[664]: I0311 13:39:56.915981     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7
	Mar 11 13:39:56 old-k8s-version-070145 kubelet[664]: E0311 13:39:56.916329     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	Mar 11 13:39:59 old-k8s-version-070145 kubelet[664]: E0311 13:39:59.916223     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:40:11 old-k8s-version-070145 kubelet[664]: I0311 13:40:11.914903     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7
	Mar 11 13:40:11 old-k8s-version-070145 kubelet[664]: E0311 13:40:11.915247     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	Mar 11 13:40:11 old-k8s-version-070145 kubelet[664]: E0311 13:40:11.916186     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:40:22 old-k8s-version-070145 kubelet[664]: E0311 13:40:22.921762     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:40:26 old-k8s-version-070145 kubelet[664]: I0311 13:40:26.914944     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7
	Mar 11 13:40:26 old-k8s-version-070145 kubelet[664]: E0311 13:40:26.915380     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	Mar 11 13:40:36 old-k8s-version-070145 kubelet[664]: E0311 13:40:36.915484     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: I0311 13:40:39.914791     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7
	Mar 11 13:40:39 old-k8s-version-070145 kubelet[664]: E0311 13:40:39.915140     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	Mar 11 13:40:50 old-k8s-version-070145 kubelet[664]: E0311 13:40:50.915444     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: I0311 13:40:53.914833     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7
	Mar 11 13:40:53 old-k8s-version-070145 kubelet[664]: E0311 13:40:53.915252     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	Mar 11 13:41:04 old-k8s-version-070145 kubelet[664]: E0311 13:41:04.934973     664 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 11 13:41:04 old-k8s-version-070145 kubelet[664]: E0311 13:41:04.935405     664 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 11 13:41:04 old-k8s-version-070145 kubelet[664]: E0311 13:41:04.935646     664 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-f2vrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 11 13:41:04 old-k8s-version-070145 kubelet[664]: E0311 13:41:04.935691     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 11 13:41:07 old-k8s-version-070145 kubelet[664]: I0311 13:41:07.914808     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7
	Mar 11 13:41:07 old-k8s-version-070145 kubelet[664]: E0311 13:41:07.915147     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	Mar 11 13:41:16 old-k8s-version-070145 kubelet[664]: E0311 13:41:16.917268     664 pod_workers.go:191] Error syncing pod 97a60e3d-c157-4827-82c5-590c454ae52c ("metrics-server-9975d5f86-fjvd8_kube-system(97a60e3d-c157-4827-82c5-590c454ae52c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 11 13:41:19 old-k8s-version-070145 kubelet[664]: I0311 13:41:19.914771     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6159094736fcfb1fe1c55f422706a16266ba9ff346982d996c31508dc0cca2c7
	Mar 11 13:41:19 old-k8s-version-070145 kubelet[664]: E0311 13:41:19.915149     664 pod_workers.go:191] Error syncing pod cd398c70-ed6f-4abf-9b68-5612dc691d46 ("dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-trflx_kubernetes-dashboard(cd398c70-ed6f-4abf-9b68-5612dc691d46)"
	
	
	==> kubernetes-dashboard [bc98146e508c71c02a0b452cdb210d18cca3ee22c41165cc797f9e83833c8ad9] <==
	2024/03/11 13:35:52 Using namespace: kubernetes-dashboard
	2024/03/11 13:35:52 Using in-cluster config to connect to apiserver
	2024/03/11 13:35:52 Using secret token for csrf signing
	2024/03/11 13:35:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/11 13:35:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/11 13:35:52 Successful initial request to the apiserver, version: v1.20.0
	2024/03/11 13:35:52 Generating JWE encryption key
	2024/03/11 13:35:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/11 13:35:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/11 13:35:52 Initializing JWE encryption key from synchronized object
	2024/03/11 13:35:52 Creating in-cluster Sidecar client
	2024/03/11 13:35:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:35:52 Serving insecurely on HTTP port: 9090
	2024/03/11 13:36:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:36:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:37:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:37:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:38:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:38:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:39:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:39:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:40:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:40:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:41:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/11 13:35:52 Starting overwatch
	
	
	==> storage-provisioner [91683919c3a681de6a823dc15c85345c16650c12b9aa001e49a672a08b900503] <==
	I0311 13:36:16.149871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 13:36:16.170065       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 13:36:16.170123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 13:36:33.657069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 13:36:33.657626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b9be581c-2ea0-4e83-8265-182464cee089", APIVersion:"v1", ResourceVersion:"835", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-070145_560b4b64-5f42-4a6b-b516-b143d43b16ba became leader
	I0311 13:36:33.658813       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-070145_560b4b64-5f42-4a6b-b516-b143d43b16ba!
	I0311 13:36:33.760093       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-070145_560b4b64-5f42-4a6b-b516-b143d43b16ba!
	
	
	==> storage-provisioner [96c1af2a12c5afb2a60e4b1487ac66e32236bd20e9cd7c3d75b01e38e1b0e6d2] <==
	I0311 13:35:31.086836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0311 13:36:01.094454       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
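The kubelet errors captured above all reduce to one cause: the metrics-server image reference points at fake.domain, which never resolves, so containerd cannot even issue the manifest HEAD request. A minimal Go sketch of just the failing lookup step (the hostname is taken from the log; the printed output is illustrative and not part of the test suite):

package main

import (
	"fmt"
	"net"
)

func main() {
	// "fake.domain" is the registry host from the kubelet ErrImagePull lines.
	// Resolution fails here the same way it does for containerd's pull.
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed:", err) // surfaces as "no such host"
		return
	}
	fmt.Println(addrs)
}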
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-070145 -n old-k8s-version-070145
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-070145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-fjvd8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-070145 describe pod metrics-server-9975d5f86-fjvd8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-070145 describe pod metrics-server-9975d5f86-fjvd8: exit status 1 (131.179283ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-fjvd8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-070145 describe pod metrics-server-9975d5f86-fjvd8: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (385.27s)
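For reference, the post-mortem filter used above (kubectl get po -A --field-selector=status.phase!=Running) has a direct client-go equivalent. A minimal sketch, assuming the default kubeconfig location and whatever context is current (explicitly selecting the old-k8s-version-070145 context is omitted for brevity):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and use its current context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field selector as the helper: pods in any namespace whose phase is not Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name)
	}
}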

                                                
                                    

Test pass (296/335)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.3
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 8.62
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 8.76
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.22
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 142.09
38 TestAddons/parallel/Registry 15.44
40 TestAddons/parallel/InspektorGadget 10.79
41 TestAddons/parallel/MetricsServer 5.92
45 TestAddons/parallel/Headlamp 11.58
46 TestAddons/parallel/CloudSpanner 5.58
47 TestAddons/parallel/LocalPath 52.64
48 TestAddons/parallel/NvidiaDevicePlugin 5.56
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.35
54 TestCertOptions 37.77
55 TestCertExpiration 232.6
57 TestForceSystemdFlag 42.14
58 TestForceSystemdEnv 46.92
59 TestDockerEnvContainerd 45.97
64 TestErrorSpam/setup 31.09
65 TestErrorSpam/start 0.76
66 TestErrorSpam/status 1
67 TestErrorSpam/pause 1.7
68 TestErrorSpam/unpause 1.83
69 TestErrorSpam/stop 1.47
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 58.43
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.29
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.95
81 TestFunctional/serial/CacheCmd/cache/add_local 1.46
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 44.87
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 2.05
92 TestFunctional/serial/LogsFileCmd 1.71
93 TestFunctional/serial/InvalidService 4.36
95 TestFunctional/parallel/ConfigCmd 0.57
96 TestFunctional/parallel/DashboardCmd 9.16
97 TestFunctional/parallel/DryRun 0.48
98 TestFunctional/parallel/InternationalLanguage 0.25
99 TestFunctional/parallel/StatusCmd 1.16
103 TestFunctional/parallel/ServiceCmdConnect 11.7
104 TestFunctional/parallel/AddonsCmd 0.2
105 TestFunctional/parallel/PersistentVolumeClaim 26.16
107 TestFunctional/parallel/SSHCmd 0.72
108 TestFunctional/parallel/CpCmd 2.52
110 TestFunctional/parallel/FileSync 0.3
111 TestFunctional/parallel/CertSync 2.07
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
119 TestFunctional/parallel/License 0.3
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.24
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
133 TestFunctional/parallel/ProfileCmd/profile_list 0.44
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
135 TestFunctional/parallel/MountCmd/any-port 7.47
136 TestFunctional/parallel/ServiceCmd/List 0.59
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
139 TestFunctional/parallel/ServiceCmd/Format 0.44
140 TestFunctional/parallel/ServiceCmd/URL 0.39
141 TestFunctional/parallel/MountCmd/specific-port 1.91
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.53
143 TestFunctional/parallel/Version/short 0.09
144 TestFunctional/parallel/Version/components 1.32
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.75
150 TestFunctional/parallel/ImageCommands/Setup 2.37
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.65
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMutliControlPlane/serial/StartCluster 135.99
168 TestMutliControlPlane/serial/DeployApp 22.19
169 TestMutliControlPlane/serial/PingHostFromPods 1.78
170 TestMutliControlPlane/serial/AddWorkerNode 27.51
171 TestMutliControlPlane/serial/NodeLabels 0.12
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.79
173 TestMutliControlPlane/serial/CopyFile 20.06
174 TestMutliControlPlane/serial/StopSecondaryNode 12.97
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
176 TestMutliControlPlane/serial/RestartSecondaryNode 18.89
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 128.4
179 TestMutliControlPlane/serial/DeleteSecondaryNode 11.7
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
181 TestMutliControlPlane/serial/StopCluster 36.18
182 TestMutliControlPlane/serial/RestartCluster 60.41
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.54
184 TestMutliControlPlane/serial/AddSecondaryNode 48.86
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
189 TestJSONOutput/start/Command 57.14
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.73
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.68
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.79
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.26
214 TestKicCustomNetwork/create_custom_network 43.67
215 TestKicCustomNetwork/use_default_bridge_network 35.6
216 TestKicExistingNetwork 37.5
217 TestKicCustomSubnet 36.23
218 TestKicStaticIP 33.89
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 67.32
223 TestMountStart/serial/StartWithMountFirst 6.32
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 6.45
226 TestMountStart/serial/VerifyMountSecond 0.27
227 TestMountStart/serial/DeleteFirst 1.68
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 7.6
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 80.46
235 TestMultiNode/serial/DeployApp2Nodes 11.28
236 TestMultiNode/serial/PingHostFrom2Pods 1.1
237 TestMultiNode/serial/AddNode 17
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.35
240 TestMultiNode/serial/CopyFile 10.49
241 TestMultiNode/serial/StopNode 2.27
242 TestMultiNode/serial/StartAfterStop 9.17
243 TestMultiNode/serial/RestartKeepsNodes 129.35
244 TestMultiNode/serial/DeleteNode 5.72
245 TestMultiNode/serial/StopMultiNode 24.02
246 TestMultiNode/serial/RestartMultiNode 50.12
247 TestMultiNode/serial/ValidateNameConflict 34.7
252 TestPreload 105.22
254 TestScheduledStopUnix 105.3
257 TestInsufficientStorage 12.92
258 TestRunningBinaryUpgrade 93.2
260 TestKubernetesUpgrade 394.74
261 TestMissingContainerUpgrade 162.31
263 TestPause/serial/Start 68.67
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
266 TestNoKubernetes/serial/StartWithK8s 42.13
267 TestNoKubernetes/serial/StartWithStopK8s 16.86
268 TestNoKubernetes/serial/Start 5.62
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
270 TestNoKubernetes/serial/ProfileList 1.03
271 TestNoKubernetes/serial/Stop 1.27
272 TestNoKubernetes/serial/StartNoArgs 6.73
273 TestPause/serial/SecondStartNoReconfiguration 6.45
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
275 TestPause/serial/Pause 0.85
276 TestPause/serial/VerifyStatus 0.33
277 TestPause/serial/Unpause 0.86
278 TestPause/serial/PauseAgain 1.12
279 TestPause/serial/DeletePaused 3.18
280 TestPause/serial/VerifyDeletedResources 0.18
281 TestStoppedBinaryUpgrade/Setup 1.15
282 TestStoppedBinaryUpgrade/Upgrade 130.88
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.45
298 TestNetworkPlugins/group/false 5.24
303 TestStartStop/group/old-k8s-version/serial/FirstStart 177.46
305 TestStartStop/group/no-preload/serial/FirstStart 78.51
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.59
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.38
308 TestStartStop/group/old-k8s-version/serial/Stop 12.55
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
311 TestStartStop/group/no-preload/serial/DeployApp 8.48
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.64
313 TestStartStop/group/no-preload/serial/Stop 12.26
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
315 TestStartStop/group/no-preload/serial/SecondStart 289.23
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
319 TestStartStop/group/no-preload/serial/Pause 3.97
321 TestStartStop/group/embed-certs/serial/FirstStart 63.24
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
325 TestStartStop/group/old-k8s-version/serial/Pause 4.26
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.19
328 TestStartStop/group/embed-certs/serial/DeployApp 8.46
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
330 TestStartStop/group/embed-certs/serial/Stop 12.19
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
332 TestStartStop/group/embed-certs/serial/SecondStart 267.59
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.06
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.73
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.13
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
341 TestStartStop/group/embed-certs/serial/Pause 3.28
343 TestStartStop/group/newest-cni/serial/FirstStart 49.14
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.12
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.3
348 TestNetworkPlugins/group/auto/Start 70.41
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.68
351 TestStartStop/group/newest-cni/serial/Stop 1.39
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
353 TestStartStop/group/newest-cni/serial/SecondStart 22.31
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
357 TestStartStop/group/newest-cni/serial/Pause 4.15
358 TestNetworkPlugins/group/kindnet/Start 58.59
359 TestNetworkPlugins/group/auto/KubeletFlags 0.38
360 TestNetworkPlugins/group/auto/NetCatPod 12.4
361 TestNetworkPlugins/group/auto/DNS 0.19
362 TestNetworkPlugins/group/auto/Localhost 0.17
363 TestNetworkPlugins/group/auto/HairPin 0.19
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/Start 81.6
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
367 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
368 TestNetworkPlugins/group/kindnet/DNS 0.25
369 TestNetworkPlugins/group/kindnet/Localhost 0.19
370 TestNetworkPlugins/group/kindnet/HairPin 0.19
371 TestNetworkPlugins/group/custom-flannel/Start 64.52
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.47
374 TestNetworkPlugins/group/calico/NetCatPod 10.42
375 TestNetworkPlugins/group/calico/DNS 0.2
376 TestNetworkPlugins/group/calico/Localhost 0.17
377 TestNetworkPlugins/group/calico/HairPin 0.18
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.47
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
380 TestNetworkPlugins/group/custom-flannel/DNS 0.3
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
383 TestNetworkPlugins/group/enable-default-cni/Start 90.37
384 TestNetworkPlugins/group/flannel/Start 64.59
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
389 TestNetworkPlugins/group/flannel/NetCatPod 11.34
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
393 TestNetworkPlugins/group/flannel/DNS 0.19
394 TestNetworkPlugins/group/flannel/Localhost 0.19
395 TestNetworkPlugins/group/flannel/HairPin 0.16
396 TestNetworkPlugins/group/bridge/Start 57.39
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
398 TestNetworkPlugins/group/bridge/NetCatPod 10.26
399 TestNetworkPlugins/group/bridge/DNS 0.19
400 TestNetworkPlugins/group/bridge/Localhost 0.15
401 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (8.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-568522 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-568522 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.303063438s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.30s)
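The -o=json flag exercised here makes minikube emit its progress as line-delimited JSON events, which is what the json-events test consumes. A rough sketch of a consumer, assuming one JSON object per line with a top-level "type" field (other fields vary by event and are not relied on):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some events are long lines
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON output
		}
		fmt.Println(ev["type"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Piping the start command into it (out/minikube-linux-arm64 start -o=json ... | go run .) prints one event type per line.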

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-568522
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-568522: exit status 85 (89.748975ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-568522 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |          |
	|         | -p download-only-568522        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:47:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:47:01.755321  746485 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:47:01.755510  746485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:01.755521  746485 out.go:304] Setting ErrFile to fd 2...
	I0311 12:47:01.755526  746485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:01.755771  746485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	W0311 12:47:01.755917  746485 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18350-741028/.minikube/config/config.json: open /home/jenkins/minikube-integration/18350-741028/.minikube/config/config.json: no such file or directory
	I0311 12:47:01.756296  746485 out.go:298] Setting JSON to true
	I0311 12:47:01.757207  746485 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16166,"bootTime":1710145056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 12:47:01.757275  746485 start.go:139] virtualization:  
	I0311 12:47:01.760881  746485 out.go:97] [download-only-568522] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:47:01.763038  746485 out.go:169] MINIKUBE_LOCATION=18350
	W0311 12:47:01.761132  746485 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball: no such file or directory
	I0311 12:47:01.761179  746485 notify.go:220] Checking for updates...
	I0311 12:47:01.765194  746485 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:47:01.767698  746485 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:47:01.769599  746485 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 12:47:01.771401  746485 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 12:47:01.775532  746485 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 12:47:01.775787  746485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:47:01.797412  746485 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:47:01.797517  746485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:01.864576  746485 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 12:47:01.855306626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:01.864691  746485 docker.go:295] overlay module found
	I0311 12:47:01.866845  746485 out.go:97] Using the docker driver based on user configuration
	I0311 12:47:01.866871  746485 start.go:297] selected driver: docker
	I0311 12:47:01.866885  746485 start.go:901] validating driver "docker" against <nil>
	I0311 12:47:01.867007  746485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:01.920888  746485 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 12:47:01.911594861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:01.921065  746485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:47:01.921367  746485 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 12:47:01.921528  746485 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 12:47:01.923742  746485 out.go:169] Using Docker driver with root privileges
	I0311 12:47:01.925537  746485 cni.go:84] Creating CNI manager for ""
	I0311 12:47:01.925559  746485 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:47:01.925573  746485 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:47:01.925668  746485 start.go:340] cluster config:
	{Name:download-only-568522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-568522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:47:01.927559  746485 out.go:97] Starting "download-only-568522" primary control-plane node in "download-only-568522" cluster
	I0311 12:47:01.927580  746485 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 12:47:01.929525  746485 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:47:01.929568  746485 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0311 12:47:01.929738  746485 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:47:01.948485  746485 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:47:01.949200  746485 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:47:01.949311  746485 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:47:01.995228  746485 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0311 12:47:01.995253  746485 cache.go:56] Caching tarball of preloaded images
	I0311 12:47:01.995991  746485 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0311 12:47:01.998807  746485 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0311 12:47:01.998851  746485 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0311 12:47:02.113986  746485 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-568522 host does not exist
	  To start a cluster, run: "minikube start -p download-only-568522"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
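The preload download in the log above carries its expected digest in the URL (checksum=md5:7e3d48ccb9f143791669d02e14ce1643), which minikube verifies after the fetch. A minimal sketch of the same verification; the file name and expected md5 come from the log, while the local path is an assumption:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Expected md5 from the checksum query parameter in the download URL.
	const want = "7e3d48ccb9f143791669d02e14ce1643"
	f, err := os.Open("preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println(got == want, got)
}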

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-568522
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (8.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-228434 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-228434 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.620135066s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (8.62s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-228434
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-228434: exit status 85 (82.997796ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-568522 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | -p download-only-568522        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-568522        | download-only-568522 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | -o=json --download-only        | download-only-228434 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | -p download-only-228434        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:47:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:47:10.505973  746647 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:47:10.506174  746647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:10.506205  746647 out.go:304] Setting ErrFile to fd 2...
	I0311 12:47:10.506226  746647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:10.506480  746647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 12:47:10.506950  746647 out.go:298] Setting JSON to true
	I0311 12:47:10.507831  746647 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16175,"bootTime":1710145056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 12:47:10.507931  746647 start.go:139] virtualization:  
	I0311 12:47:10.510678  746647 out.go:97] [download-only-228434] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:47:10.513081  746647 out.go:169] MINIKUBE_LOCATION=18350
	I0311 12:47:10.510899  746647 notify.go:220] Checking for updates...
	I0311 12:47:10.516444  746647 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:47:10.518569  746647 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:47:10.520502  746647 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 12:47:10.522576  746647 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 12:47:10.525951  746647 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 12:47:10.526225  746647 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:47:10.546315  746647 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:47:10.546415  746647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:10.617380  746647 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-11 12:47:10.607749625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:10.617507  746647 docker.go:295] overlay module found
	I0311 12:47:10.619696  746647 out.go:97] Using the docker driver based on user configuration
	I0311 12:47:10.619721  746647 start.go:297] selected driver: docker
	I0311 12:47:10.619727  746647 start.go:901] validating driver "docker" against <nil>
	I0311 12:47:10.619838  746647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:10.677178  746647 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-11 12:47:10.667800708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:10.677351  746647 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:47:10.677644  746647 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 12:47:10.677797  746647 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 12:47:10.679978  746647 out.go:169] Using Docker driver with root privileges
	I0311 12:47:10.681694  746647 cni.go:84] Creating CNI manager for ""
	I0311 12:47:10.681724  746647 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:47:10.681735  746647 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:47:10.681815  746647 start.go:340] cluster config:
	{Name:download-only-228434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-228434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:47:10.683504  746647 out.go:97] Starting "download-only-228434" primary control-plane node in "download-only-228434" cluster
	I0311 12:47:10.683543  746647 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 12:47:10.685432  746647 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:47:10.685467  746647 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 12:47:10.685568  746647 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:47:10.699559  746647 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:47:10.699690  746647 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:47:10.699709  746647 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 12:47:10.699714  746647 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 12:47:10.699721  746647 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 12:47:10.745950  746647 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0311 12:47:10.745974  746647 cache.go:56] Caching tarball of preloaded images
	I0311 12:47:10.746153  746647 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 12:47:10.747963  746647 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0311 12:47:10.747994  746647 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0311 12:47:10.853688  746647 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0311 12:47:15.195004  746647 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0311 12:47:15.195150  746647 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-228434 host does not exist
	  To start a cluster, run: "minikube start -p download-only-228434"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
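Note: preload.go above fetches the tarball with an md5 digest in the query string (?checksum=md5:cc2d75db20c4d651f0460755d6df7b03) and verifies it after saving. A minimal Go sketch of that verify-after-download step, reusing the URL and digest from this log; it is an illustration of the idea, not minikube's own preload code:

    // verify_preload.go - hypothetical sketch: stream the preload tarball and
    // compare its MD5 against the digest the log passes as ?checksum=md5:...
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4"
        const wantMD5 = "cc2d75db20c4d651f0460755d6df7b03" // digest from the log above

        resp, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        h := md5.New()
        if _, err := io.Copy(h, resp.Body); err != nil { // stream instead of buffering the whole tarball
            log.Fatal(err)
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            log.Fatalf("checksum mismatch: got %s want %s", got, wantMD5)
        }
        fmt.Println("preload checksum OK")
    }

(The real flow also writes the file into the cache directory; hashing the stream directly keeps the sketch short.)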

TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-228434
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (8.76s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-628520 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-628520 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.755521885s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (8.76s)
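Note: with -o=json the start command emits one JSON event per line instead of the human-readable output. A small Go sketch that consumes such a stream and prints each step; the CloudEvents-style field names (type, data.name) match typical minikube JSON output but are assumptions here, not verified against this exact build:

    // json_events.go - sketch: read "minikube start -o=json" events from stdin.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        Type string         `json:"type"`
        Data map[string]any `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some events are long lines
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate non-JSON noise in the stream
            }
            fmt.Printf("%-45s %v\n", ev.Type, ev.Data["name"])
        }
    }

Usage would be a pipe, e.g.: out/minikube-linux-arm64 start -o=json --download-only ... | go run json_events.go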

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-628520
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-628520: exit status 85 (85.243657ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-568522 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | -p download-only-568522           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-568522           | download-only-568522 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | -o=json --download-only           | download-only-228434 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | -p download-only-228434           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| delete  | -p download-only-228434           | download-only-228434 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC | 11 Mar 24 12:47 UTC |
	| start   | -o=json --download-only           | download-only-628520 | jenkins | v1.32.0 | 11 Mar 24 12:47 UTC |                     |
	|         | -p download-only-628520           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:47:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:47:19.567236  746809 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:47:19.567412  746809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:19.567426  746809 out.go:304] Setting ErrFile to fd 2...
	I0311 12:47:19.567430  746809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:47:19.567702  746809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 12:47:19.568191  746809 out.go:298] Setting JSON to true
	I0311 12:47:19.569148  746809 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16184,"bootTime":1710145056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 12:47:19.569227  746809 start.go:139] virtualization:  
	I0311 12:47:19.571810  746809 out.go:97] [download-only-628520] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:47:19.573858  746809 out.go:169] MINIKUBE_LOCATION=18350
	I0311 12:47:19.572058  746809 notify.go:220] Checking for updates...
	I0311 12:47:19.577493  746809 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:47:19.579379  746809 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:47:19.581280  746809 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 12:47:19.583372  746809 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 12:47:19.587282  746809 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 12:47:19.587577  746809 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:47:19.610612  746809 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:47:19.610721  746809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:19.678087  746809 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:47:19.668624461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:19.678190  746809 docker.go:295] overlay module found
	I0311 12:47:19.680275  746809 out.go:97] Using the docker driver based on user configuration
	I0311 12:47:19.680301  746809 start.go:297] selected driver: docker
	I0311 12:47:19.680307  746809 start.go:901] validating driver "docker" against <nil>
	I0311 12:47:19.680427  746809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:47:19.733748  746809 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:47:19.725013814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:47:19.733932  746809 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:47:19.734270  746809 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 12:47:19.734430  746809 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 12:47:19.736540  746809 out.go:169] Using Docker driver with root privileges
	I0311 12:47:19.738544  746809 cni.go:84] Creating CNI manager for ""
	I0311 12:47:19.738568  746809 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 12:47:19.738579  746809 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:47:19.738657  746809 start.go:340] cluster config:
	{Name:download-only-628520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-628520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:47:19.740979  746809 out.go:97] Starting "download-only-628520" primary control-plane node in "download-only-628520" cluster
	I0311 12:47:19.741007  746809 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 12:47:19.743509  746809 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:47:19.743535  746809 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0311 12:47:19.743708  746809 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:47:19.758518  746809 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:47:19.758633  746809 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:47:19.758658  746809 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 12:47:19.758680  746809 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 12:47:19.758689  746809 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 12:47:19.807283  746809 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0311 12:47:19.807311  746809 cache.go:56] Caching tarball of preloaded images
	I0311 12:47:19.807471  746809 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0311 12:47:19.809633  746809 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0311 12:47:19.809655  746809 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0311 12:47:19.922424  746809 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18350-741028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-628520 host does not exist
	  To start a cluster, run: "minikube start -p download-only-628520"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
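Note: the "Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" header above describes the klog format used by every I/W/E line in these dumps. A short Go sketch that splits such lines into fields for post-processing; the regular expression is ours, not part of minikube:

    // klog_parse.go - sketch: field-split a klog-formatted log line.
    package main

    import (
        "fmt"
        "regexp"
    )

    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

    func main() {
        line := "I0311 12:47:19.568191  746809 out.go:298] Setting JSON to true"
        m := klogRe.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }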

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-628520
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-995452 --alsologtostderr --binary-mirror http://127.0.0.1:34573 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-995452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-995452
--- PASS: TestBinaryMirror (0.62s)
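Note: TestBinaryMirror starts a temporary HTTP server and passes its address via --binary-mirror, so the kubeadm/kubelet/kubectl downloads come from localhost instead of the default upstream. A standalone Go sketch of such a mirror; the port matches this run's log, and the idea that the on-disk tree mirrors the upstream URL paths is an assumption, not taken from the test:

    // binary_mirror.go - sketch: serve a local directory as a binary mirror.
    package main

    import (
        "log"
        "net/http"
    )

    // logged makes every mirrored download visible in the server log.
    func logged(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            log.Printf("%s %s", r.Method, r.URL.Path)
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        log.Println("serving ./mirror on http://127.0.0.1:34573")
        log.Fatal(http.ListenAndServe("127.0.0.1:34573", logged(http.FileServer(http.Dir("./mirror")))))
    }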

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-109866
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-109866: exit status 85 (80.303963ms)
-- stdout --
	* Profile "addons-109866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-109866"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-109866
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-109866: exit status 85 (94.044251ms)
-- stdout --
	* Profile "addons-109866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-109866"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
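Note: both PreSetup tests assert that addon commands against a profile that does not exist yet fail with exit status 85 rather than succeeding silently. A Go sketch of how a caller recovers that code from os/exec; the command line is the one from the log above:

    // exit_code.go - sketch: run the addon command and read its exit status.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-109866")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("exit status 0\n%s", out)
        case errors.As(err, &ee):
            fmt.Printf("exit status %d\n%s", ee.ExitCode(), out) // 85 in the runs above
        default:
            log.Fatal(err) // binary missing, not executable, etc.
        }
    }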

TestAddons/Setup (142.09s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-109866 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-109866 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m22.092198529s)
--- PASS: TestAddons/Setup (142.09s)

TestAddons/parallel/Registry (15.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 51.550589ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-htvdt" [a674420f-29d1-47aa-96b0-e37549d4e224] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004999657s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t89kt" [b429a937-8de5-46fc-885a-51a33440731e] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004799893s
addons_test.go:340: (dbg) Run:  kubectl --context addons-109866 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-109866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-109866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.272634963s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 ip
2024/03/11 12:50:07 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.44s)
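Note: the test probes the registry addon twice, first with wget from inside a busybox pod, then with a host-side GET against the node IP. A Go sketch of the host-side probe; "/v2/" is the standard Docker Registry HTTP API v2 ping endpoint (assuming, as for any v2 registry, that this one exposes it), and the IP/port come from the log:

    // registry_ping.go - sketch: host-side reachability check for the registry addon.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://192.168.49.2:5000/v2/") // node IP printed by "minikube ip" above
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status) // 200 OK means the registry is up
    }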

TestAddons/parallel/InspektorGadget (10.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ggm8x" [28aad64b-423a-4b2f-b121-432c89475745] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004422658s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-109866
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-109866: (5.784498347s)
--- PASS: TestAddons/parallel/InspektorGadget (10.79s)

TestAddons/parallel/MetricsServer (5.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.844438ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-vx8dw" [e82eba82-b9ef-4607-9492-6a41d0ca5885] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004845668s
addons_test.go:415: (dbg) Run:  kubectl --context addons-109866 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)

TestAddons/parallel/Headlamp (11.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-109866 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-109866 --alsologtostderr -v=1: (1.570916669s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-jgglr" [a039f4dd-afee-4b14-a731-eefd26cdfa62] Pending
helpers_test.go:344: "headlamp-5485c556b-jgglr" [a039f4dd-afee-4b14-a731-eefd26cdfa62] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-jgglr" [a039f4dd-afee-4b14-a731-eefd26cdfa62] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004442956s
--- PASS: TestAddons/parallel/Headlamp (11.58s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-kt592" [2b1af88f-079d-4930-a0a4-cbd95e053a2e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003942298s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-109866
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/parallel/LocalPath (52.64s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-109866 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-109866 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109866 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3b8d90a7-fb7c-4d1b-8f22-6da9a2d7d3a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3b8d90a7-fb7c-4d1b-8f22-6da9a2d7d3a0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3b8d90a7-fb7c-4d1b-8f22-6da9a2d7d3a0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003965882s
addons_test.go:891: (dbg) Run:  kubectl --context addons-109866 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 ssh "cat /opt/local-path-provisioner/pvc-28835a80-bbb1-42b9-a246-925c8b10c615_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-109866 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-109866 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-109866 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-109866 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.426554895s)
--- PASS: TestAddons/parallel/LocalPath (52.64s)
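Note: the six repeated helpers_test.go:394 invocations above are a poll loop: fetch .status.phase via jsonpath until the PVC binds. The same loop as a standalone Go sketch; the context and resource names are from the log, while the 2s interval and 5m deadline are our choices:

    // wait_pvc.go - sketch: poll a PVC until its phase is "Bound".
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "addons-109866",
                "get", "pvc", "test-pvc", "-n", "default",
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                fmt.Println("test-pvc is Bound")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for test-pvc to bind")
    }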

TestAddons/parallel/NvidiaDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jd445" [6386f2cb-771c-4f32-9490-ef0becc98007] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00783001s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-109866
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-cg9cw" [85e49ade-77d9-4e8e-856d-63ef085a3b36] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005346145s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-109866 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-109866 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-109866
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-109866: (12.026402672s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-109866
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-109866
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-109866
--- PASS: TestAddons/StoppedEnableDisable (12.35s)

TestCertOptions (37.77s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-508405 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0311 13:31:09.624845  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-508405 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.031680474s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-508405 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-508405 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-508405 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-508405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-508405
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-508405: (1.982670466s)
--- PASS: TestCertOptions (37.77s)
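Note: the openssl step above verifies that --apiserver-ips/--apiserver-names/--apiserver-port ended up in the API server certificate. The same inspection in Go with crypto/x509, assuming the certificate has first been copied out of the node (for example via the ssh command shown) to a local apiserver.crt:

    // cert_sans.go - sketch: list the SANs and expiry of the apiserver cert.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // copied from /var/lib/minikube/certs/ in the node
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in apiserver.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)    // should include localhost, www.google.com
        fmt.Println("IP SANs: ", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
        fmt.Println("expires: ", cert.NotAfter)
    }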

TestCertExpiration (232.6s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-442518 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-442518 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.098483949s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-442518 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-442518 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.151553348s)
helpers_test.go:175: Cleaning up "cert-expiration-442518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-442518
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-442518: (2.348591424s)
--- PASS: TestCertExpiration (232.60s)

TestForceSystemdFlag (42.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-832517 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0311 13:29:52.582000  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-832517 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.649437369s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-832517 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-832517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-832517
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-832517: (2.118582111s)
--- PASS: TestForceSystemdFlag (42.14s)
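Note: after starting with --force-systemd the test reads /etc/containerd/config.toml over ssh. A Go sketch of the follow-up check; that the flag should surface as "SystemdCgroup = true" in the runc options is our assumption about the expected config, not a quote of the test's own assertion:

    // systemd_check.go - sketch: fetch containerd's config via minikube ssh and
    // look for the systemd cgroup driver setting.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-832517",
            "ssh", "cat /etc/containerd/config.toml").Output()
        if err != nil {
            log.Fatal(err)
        }
        if strings.Contains(string(out), "SystemdCgroup = true") {
            fmt.Println("containerd uses the systemd cgroup driver")
        } else {
            fmt.Println("SystemdCgroup = true not found")
        }
    }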

TestForceSystemdEnv (46.92s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-445766 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-445766 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.233596576s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-445766 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-445766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-445766
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-445766: (2.273023107s)
--- PASS: TestForceSystemdEnv (46.92s)

TestDockerEnvContainerd (45.97s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-635589 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-635589 --driver=docker  --container-runtime=containerd: (30.052696162s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-635589"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-635589": (1.255652378s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-oZPdJJDisrDI/agent.763739" SSH_AGENT_PID="763740" DOCKER_HOST=ssh://docker@127.0.0.1:33748 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-oZPdJJDisrDI/agent.763739" SSH_AGENT_PID="763740" DOCKER_HOST=ssh://docker@127.0.0.1:33748 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-oZPdJJDisrDI/agent.763739" SSH_AGENT_PID="763740" DOCKER_HOST=ssh://docker@127.0.0.1:33748 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.255338332s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-oZPdJJDisrDI/agent.763739" SSH_AGENT_PID="763740" DOCKER_HOST=ssh://docker@127.0.0.1:33748 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-635589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-635589
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-635589: (1.947664141s)
--- PASS: TestDockerEnvContainerd (45.97s)
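Note: the bash one-liners above show what "minikube docker-env --ssh-host --ssh-add" produces: a DOCKER_HOST pointing at the node over ssh plus SSH agent variables. A Go sketch doing the same as the "docker version" step; the socket path, agent pid and port are this run's values and will differ per run:

    // docker_env.go - sketch: drive the cluster's Docker endpoint over ssh.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "version")
        cmd.Env = append(os.Environ(),
            "DOCKER_HOST=ssh://docker@127.0.0.1:33748",
            "SSH_AUTH_SOCK=/tmp/ssh-oZPdJJDisrDI/agent.763739",
            "SSH_AGENT_PID=763740",
        )
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }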

TestErrorSpam/setup (31.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-799627 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-799627 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-799627 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-799627 --driver=docker  --container-runtime=containerd: (31.0807133s)
--- PASS: TestErrorSpam/setup (31.09s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 pause
--- PASS: TestErrorSpam/pause (1.70s)

TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (1.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 stop: (1.259192533s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-799627 --log_dir /tmp/nospam-799627 stop
--- PASS: TestErrorSpam/stop (1.47s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18350-741028/.minikube/files/etc/test/nested/copy/746480/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (58.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-891062 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0311 12:54:52.582527  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:52.588309  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:52.598636  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:52.619121  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:52.659399  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:52.739722  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:52.900103  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:53.220669  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:53.861570  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:55.141969  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 12:54:57.703717  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-891062 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (58.431757285s)
--- PASS: TestFunctional/serial/StartWithProxy (58.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.29s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-891062 --alsologtostderr -v=8
E0311 12:55:02.823956  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-891062 --alsologtostderr -v=8: (6.283884496s)
functional_test.go:659: soft start took 6.287609397s for "functional-891062" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.29s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-891062 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 cache add registry.k8s.io/pause:3.1: (1.448919298s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 cache add registry.k8s.io/pause:3.3: (1.267487397s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 cache add registry.k8s.io/pause:latest: (1.229889994s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)
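
The add_remote steps above are the whole remote-cache flow: cache add pulls the image on the host and preloads it into the node's containerd store. A by-hand sketch using the commands from this run (the grep filter on the second line is an addition here):

  $ out/minikube-linux-arm64 -p functional-891062 cache add registry.k8s.io/pause:3.1
  $ out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl images | grep pause   # image now visible inside the node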

TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-891062 /tmp/TestFunctionalserialCacheCmdcacheadd_local2618123734/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cache add minikube-local-cache-test:functional-891062
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cache delete minikube-local-cache-test:functional-891062
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-891062
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.46s)
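
add_local does the same round trip for an image that exists only in the host's docker daemon. A sketch, with <build-context> standing in for the temporary build directory the test generates:

  $ docker build -t minikube-local-cache-test:functional-891062 <build-context>
  $ out/minikube-linux-arm64 -p functional-891062 cache add minikube-local-cache-test:functional-891062
  $ out/minikube-linux-arm64 -p functional-891062 cache delete minikube-local-cache-test:functional-891062
  $ docker rmi minikube-local-cache-test:functional-891062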

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.961658ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cache reload
E0311 12:55:13.064472  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 cache reload: (1.151288878s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)
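
The cache_reload sequence shows the recovery path: the image is removed inside the node, crictl inspecti confirms it is gone (the exit-1 captured above), and cache reload restores it from the host-side cache. The same four commands by hand:

  $ out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
  $ out/minikube-linux-arm64 -p functional-891062 cache reload
  $ out/minikube-linux-arm64 -p functional-891062 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again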

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 kubectl -- --context functional-891062 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-891062 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (44.87s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-891062 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0311 12:55:33.544662  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-891062 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.86954173s)
functional_test.go:757: restart took 44.869645048s for "functional-891062" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.87s)
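
ExtraConfig restarts the cluster with a component.key=value flag that minikube passes through to the named component; here it enables an extra apiserver admission plugin, and --wait=all blocks until every component reports healthy. The invocation, verbatim from this run:

  $ out/minikube-linux-arm64 start -p functional-891062 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all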

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-891062 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (2.05s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 logs: (2.049042784s)
--- PASS: TestFunctional/serial/LogsCmd (2.05s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 logs --file /tmp/TestFunctionalserialLogsFileCmd3276880219/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 logs --file /tmp/TestFunctionalserialLogsFileCmd3276880219/001/logs.txt: (1.712578855s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-891062 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-891062
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-891062: exit status 115 (405.428209ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31454 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-891062 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)
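
The InvalidService check relies on minikube service failing fast when a service selects no running pods: it still prints the NodePort table, but exits 115 with SVC_UNREACHABLE, as captured above. A sketch, assuming the invalidsvc.yaml manifest from the minikube repository's testdata directory:

  $ kubectl --context functional-891062 apply -f testdata/invalidsvc.yaml
  $ out/minikube-linux-arm64 service invalid-svc -p functional-891062   # exit 115, SVC_UNREACHABLE
  $ kubectl --context functional-891062 delete -f testdata/invalidsvc.yaml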

TestFunctional/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 config get cpus: exit status 14 (90.334433ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 config get cpus: exit status 14 (94.35987ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)
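
The exit-14 results above are the expected behavior: config get returns 14 when the key is unset, which is what makes the set/unset round trip verifiable. By hand:

  $ out/minikube-linux-arm64 -p functional-891062 config get cpus     # exit 14: key not set
  $ out/minikube-linux-arm64 -p functional-891062 config set cpus 2
  $ out/minikube-linux-arm64 -p functional-891062 config get cpus     # prints 2
  $ out/minikube-linux-arm64 -p functional-891062 config unset cpus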

TestFunctional/parallel/DashboardCmd (9.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-891062 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-891062 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 777921: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.16s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-891062 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-891062 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (222.363079ms)
-- stdout --
	* [functional-891062] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0311 12:56:42.365806  777541 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:56:42.366007  777541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:56:42.366044  777541 out.go:304] Setting ErrFile to fd 2...
	I0311 12:56:42.366066  777541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:56:42.366408  777541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 12:56:42.366883  777541 out.go:298] Setting JSON to false
	I0311 12:56:42.367930  777541 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16747,"bootTime":1710145056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 12:56:42.368062  777541 start.go:139] virtualization:  
	I0311 12:56:42.371036  777541 out.go:177] * [functional-891062] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:56:42.374407  777541 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 12:56:42.376883  777541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:56:42.374490  777541 notify.go:220] Checking for updates...
	I0311 12:56:42.381772  777541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:56:42.384138  777541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 12:56:42.387154  777541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 12:56:42.389714  777541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 12:56:42.392630  777541 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:56:42.393305  777541 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:56:42.416971  777541 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:56:42.417090  777541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:56:42.494885  777541 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-11 12:56:42.483944109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:56:42.495018  777541 docker.go:295] overlay module found
	I0311 12:56:42.498651  777541 out.go:177] * Using the docker driver based on existing profile
	I0311 12:56:42.500900  777541 start.go:297] selected driver: docker
	I0311 12:56:42.500923  777541 start.go:901] validating driver "docker" against &{Name:functional-891062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-891062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:56:42.501052  777541 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 12:56:42.504722  777541 out.go:177] 
	W0311 12:56:42.507477  777541 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0311 12:56:42.509698  777541 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-891062 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
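
--dry-run validates the requested configuration against the existing profile without touching the cluster; an impossible request (250MB against the 1800MB usable minimum) exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while a valid request exits 0. The two invocations from this run:

  $ out/minikube-linux-arm64 start -p functional-891062 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=containerd    # exit 23
  $ out/minikube-linux-arm64 start -p functional-891062 --dry-run \
      --alsologtostderr -v=1 --driver=docker --container-runtime=containerd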

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-891062 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-891062 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (245.76768ms)
-- stdout --
	* [functional-891062] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0311 12:56:42.118629  777501 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:56:42.119027  777501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:56:42.119045  777501 out.go:304] Setting ErrFile to fd 2...
	I0311 12:56:42.119052  777501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:56:42.119771  777501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 12:56:42.121372  777501 out.go:298] Setting JSON to false
	I0311 12:56:42.122830  777501 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16747,"bootTime":1710145056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 12:56:42.122992  777501 start.go:139] virtualization:  
	I0311 12:56:42.126298  777501 out.go:177] * [functional-891062] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0311 12:56:42.129385  777501 notify.go:220] Checking for updates...
	I0311 12:56:42.133822  777501 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 12:56:42.139121  777501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:56:42.141961  777501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 12:56:42.145763  777501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 12:56:42.148727  777501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 12:56:42.151125  777501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 12:56:42.155244  777501 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 12:56:42.155927  777501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:56:42.199319  777501 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:56:42.199460  777501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:56:42.271431  777501 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-11 12:56:42.256644467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:56:42.271553  777501 docker.go:295] overlay module found
	I0311 12:56:42.274378  777501 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0311 12:56:42.277102  777501 start.go:297] selected driver: docker
	I0311 12:56:42.277130  777501 start.go:901] validating driver "docker" against &{Name:functional-891062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-891062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:56:42.277271  777501 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 12:56:42.280272  777501 out.go:177] 
	W0311 12:56:42.283163  777501 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0311 12:56:42.285188  777501 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)
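
status accepts a Go template over the status struct ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}) as well as -o json; the label text before each field is arbitrary (the run above spells one label "kublet", which only affects the printed label, not the field lookup). A sketch of the three forms:

  $ out/minikube-linux-arm64 -p functional-891062 status
  $ out/minikube-linux-arm64 -p functional-891062 status \
      -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  $ out/minikube-linux-arm64 -p functional-891062 status -o json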

TestFunctional/parallel/ServiceCmdConnect (11.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-891062 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-891062 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-t9gxq" [4588150c-57c0-4e98-a3df-1ca9a22e2f76] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-t9gxq" [4588150c-57c0-4e98-a3df-1ca9a22e2f76] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003670809s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32205
functional_test.go:1671: http://192.168.49.2:32205: success! body:

Hostname: hello-node-connect-7799dfb7c6-t9gxq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32205
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.70s)
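
ServiceCmdConnect is the NodePort round trip end to end: create a deployment, expose it, and let minikube service resolve the node URL; the echoserver body above is the proof the request reached the pod. A sketch (the final curl is an addition here, not part of the test, and the port is whatever the service command prints):

  $ kubectl --context functional-891062 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver-arm:1.8
  $ kubectl --context functional-891062 expose deployment hello-node-connect --type=NodePort --port=8080
  $ out/minikube-linux-arm64 -p functional-891062 service hello-node-connect --url
  $ curl http://192.168.49.2:32205/    # URL printed by the previous command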

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (26.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e2ca94ee-65d1-44d8-b051-c006b2e69dd1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006494455s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-891062 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-891062 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-891062 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-891062 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0b5eee39-270d-4ba7-80b8-2b32b8825fc8] Pending
helpers_test.go:344: "sp-pod" [0b5eee39-270d-4ba7-80b8-2b32b8825fc8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0b5eee39-270d-4ba7-80b8-2b32b8825fc8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003645384s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-891062 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-891062 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-891062 delete -f testdata/storage-provisioner/pod.yaml: (1.160021727s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-891062 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ee91fd5d-3b0c-4dd3-8b9e-3448bbcaf669] Pending
helpers_test.go:344: "sp-pod" [ee91fd5d-3b0c-4dd3-8b9e-3448bbcaf669] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ee91fd5d-3b0c-4dd3-8b9e-3448bbcaf669] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004220077s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-891062 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.16s)
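
The PVC test demonstrates that data outlives the pod: write through the claim, delete the pod, recreate it against the same claim, and read the file back. The kubectl sequence, assuming the storage-provisioner manifests from the repository's testdata directory:

  $ kubectl --context functional-891062 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-891062 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-891062 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-891062 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-891062 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-891062 exec sp-pod -- ls /tmp/mount    # foo survives the pod restart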

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.52s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh -n functional-891062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cp functional-891062:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2307158763/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh -n functional-891062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh -n functional-891062 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.52s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/746480/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /etc/test/nested/copy/746480/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (2.07s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/746480.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /etc/ssl/certs/746480.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/746480.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /usr/share/ca-certificates/746480.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/7464802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /etc/ssl/certs/7464802.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/7464802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /usr/share/ca-certificates/7464802.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.07s)
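
CertSync verifies that a host-side certificate (named after the test's process id, 746480.pem here) is mirrored into both trust locations inside the node, plus a hash-named copy in /etc/ssl/certs (51391683.0 above, which looks like OpenSSL subject-hash naming, though the report itself does not say so). Spot-checking one file per location:

  $ out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /etc/ssl/certs/746480.pem"
  $ out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /usr/share/ca-certificates/746480.pem"
  $ out/minikube-linux-arm64 -p functional-891062 ssh "sudo cat /etc/ssl/certs/51391683.0"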

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-891062 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 ssh "sudo systemctl is-active docker": exit status 1 (359.058679ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 ssh "sudo systemctl is-active crio": exit status 1 (351.177512ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
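
On a containerd cluster the docker and crio units must be inactive; systemctl is-active prints "inactive" and exits 3, which the ssh layer surfaces as the non-zero status recorded above, so both exit-1 results are passes. Checking all three runtimes (the containerd line is an addition here; the test only probes the two disabled ones):

  $ out/minikube-linux-arm64 -p functional-891062 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit
  $ out/minikube-linux-arm64 -p functional-891062 ssh "sudo systemctl is-active crio"        # inactive, non-zero exit
  $ out/minikube-linux-arm64 -p functional-891062 ssh "sudo systemctl is-active containerd"  # expected: active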

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-891062 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-891062 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-891062 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-891062 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 775301: os: process already finished
helpers_test.go:502: unable to terminate pid 775141: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-891062 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-891062 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7ce190ca-8958-4ed4-8cfd-a3694fea09ef] Pending
helpers_test.go:344: "nginx-svc" [7ce190ca-8958-4ed4-8cfd-a3694fea09ef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7ce190ca-8958-4ed4-8cfd-a3694fea09ef] Running
E0311 12:56:14.505245  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.0055202s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-891062 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.115.53 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
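
The serial tunnel tests above can be walked through by hand: start the tunnel, read the LoadBalancer ingress IP it assigns, then hit it. A sketch under the same assumptions as this run (profile functional-891062, service nginx-svc); the IP shown comes from this run's log:

    minikube -p functional-891062 tunnel &   # keeps running; creates routes to cluster IPs
    kubectl --context functional-891062 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # e.g. 10.98.115.53
    curl -sI http://10.98.115.53                           # reachable once the tunnel is up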

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-891062 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-891062 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-891062 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-q8cc8" [0d9a0772-a5a9-4297-8107-2a68246714dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-q8cc8" [0d9a0772-a5a9-4297-8107-2a68246714dc] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004241115s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "372.933513ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "65.433983ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "348.784307ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "60.735081ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
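
The timings above reflect how the listing variants differ: the plain listing probes each cluster's live status, while the -l / --light forms skip that validation and answer from local config alone, hence roughly 60ms versus 350ms here. Sketch:

    minikube profile list                   # full listing, probes cluster status
    minikube profile list -l                # light: config only, no status probe
    minikube profile list -o json --light   # same, rendered as JSON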

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdany-port4262894509/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710161797677339601" to /tmp/TestFunctionalparallelMountCmdany-port4262894509/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710161797677339601" to /tmp/TestFunctionalparallelMountCmdany-port4262894509/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710161797677339601" to /tmp/TestFunctionalparallelMountCmdany-port4262894509/001/test-1710161797677339601
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.738776ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 11 12:56 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 11 12:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 11 12:56 test-1710161797677339601
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh cat /mount-9p/test-1710161797677339601
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-891062 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8b00f9cd-45f2-4e60-9f1f-a4e05e6450b6] Pending
helpers_test.go:344: "busybox-mount" [8b00f9cd-45f2-4e60-9f1f-a4e05e6450b6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8b00f9cd-45f2-4e60-9f1f-a4e05e6450b6] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8b00f9cd-45f2-4e60-9f1f-a4e05e6450b6] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004134129s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-891062 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdany-port4262894509/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.47s)
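
The failed first findmnt followed by a clean retry is the test polling for the 9p mount to appear; the mount daemon starts asynchronously, so the first probe can race it. A sketch of the same verification, with /tmp/demo-mount as a hypothetical host directory:

    minikube mount -p functional-891062 /tmp/demo-mount:/mount-9p &     # 9p server on the host
    minikube -p functional-891062 ssh "findmnt -T /mount-9p | grep 9p"  # may need one retry
    minikube -p functional-891062 ssh "ls -la /mount-9p"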

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 service list -o json
functional_test.go:1490: Took "615.022275ms" to run "out/minikube-linux-arm64 -p functional-891062 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30381
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30381
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdspecific-port2257501889/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (467.639203ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdspecific-port2257501889/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 ssh "sudo umount -f /mount-9p": exit status 1 (306.269846ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-891062 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdspecific-port2257501889/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)
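
The umount failure at the end is also expected: the mount daemon has already been stopped, so the path is no longer mounted by the time the forced unmount runs. Sketch of what happens on a retry:

    minikube -p functional-891062 ssh "sudo umount -f /mount-9p"
    # on the node, umount prints "not mounted" and exits 32 (util-linux's
    # generic "mount failure" code); minikube ssh surfaces that as the
    # non-zero exit the test tolerates above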

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4214854123/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4214854123/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4214854123/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T" /mount1: exit status 1 (931.83377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-891062 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4214854123/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4214854123/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-891062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4214854123/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 version -o=json --components: (1.320344683s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-891062 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-891062
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-891062 image ls --format short --alsologtostderr:
I0311 12:57:08.510035  779946 out.go:291] Setting OutFile to fd 1 ...
I0311 12:57:08.510199  779946 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:08.510205  779946 out.go:304] Setting ErrFile to fd 2...
I0311 12:57:08.510210  779946 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:08.510483  779946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
I0311 12:57:08.511156  779946 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:08.511291  779946 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:08.511758  779946 cli_runner.go:164] Run: docker container inspect functional-891062 --format={{.State.Status}}
I0311 12:57:08.530203  779946 ssh_runner.go:195] Run: systemctl --version
I0311 12:57:08.530276  779946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-891062
I0311 12:57:08.554291  779946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33758 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/functional-891062/id_rsa Username:docker}
I0311 12:57:08.650298  779946 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-891062 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:be5e6f | 17.6MB |
| docker.io/library/nginx                     | latest             | sha256:760b7c | 67.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/library/minikube-local-cache-test | functional-891062  | sha256:5afd5d | 1.01kB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-891062 image ls --format table --alsologtostderr:
I0311 12:57:09.127304  780079 out.go:291] Setting OutFile to fd 1 ...
I0311 12:57:09.127463  780079 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:09.127473  780079 out.go:304] Setting ErrFile to fd 2...
I0311 12:57:09.127479  780079 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:09.127740  780079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
I0311 12:57:09.128447  780079 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:09.128583  780079 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:09.129129  780079 cli_runner.go:164] Run: docker container inspect functional-891062 --format={{.State.Status}}
I0311 12:57:09.160044  780079 ssh_runner.go:195] Run: systemctl --version
I0311 12:57:09.160102  780079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-891062
I0311 12:57:09.186998  780079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33758 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/functional-891062/id_rsa Username:docker}
I0311 12:57:09.285336  780079 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-891062 image ls --format json --alsologtostderr:
[{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601423"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:5afd5d2f9466eb20387c372b79aa22069ea4963d54e8e856c9dbbe4047322b02","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-891062"],"size":"1006"},{"id":"sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":["docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216905"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-891062 image ls --format json --alsologtostderr:
I0311 12:57:08.841501  780007 out.go:291] Setting OutFile to fd 1 ...
I0311 12:57:08.845429  780007 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:08.845447  780007 out.go:304] Setting ErrFile to fd 2...
I0311 12:57:08.845453  780007 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:08.845732  780007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
I0311 12:57:08.846465  780007 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:08.846599  780007 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:08.847094  780007 cli_runner.go:164] Run: docker container inspect functional-891062 --format={{.State.Status}}
I0311 12:57:08.876975  780007 ssh_runner.go:195] Run: systemctl --version
I0311 12:57:08.877036  780007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-891062
I0311 12:57:08.900381  780007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33758 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/functional-891062/id_rsa Username:docker}
I0311 12:57:08.993351  780007 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-891062 image ls --format yaml --alsologtostderr:
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:5afd5d2f9466eb20387c372b79aa22069ea4963d54e8e856c9dbbe4047322b02
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-891062
size: "1006"
- id: sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "17601423"
- id: sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests:
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "67216905"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-891062 image ls --format yaml --alsologtostderr:
I0311 12:57:08.493382  779947 out.go:291] Setting OutFile to fd 1 ...
I0311 12:57:08.493598  779947 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:08.493628  779947 out.go:304] Setting ErrFile to fd 2...
I0311 12:57:08.493648  779947 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:08.493908  779947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
I0311 12:57:08.494614  779947 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:08.494791  779947 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:08.495352  779947 cli_runner.go:164] Run: docker container inspect functional-891062 --format={{.State.Status}}
I0311 12:57:08.521372  779947 ssh_runner.go:195] Run: systemctl --version
I0311 12:57:08.521424  779947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-891062
I0311 12:57:08.545185  779947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33758 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/functional-891062/id_rsa Username:docker}
I0311 12:57:08.641449  779947 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
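
All four ImageList variants above read the same data: each invocation runs sudo crictl images --output json on the node (visible at the end of every stderr trace) and only the client-side rendering differs. Sketch:

    minikube -p functional-891062 image ls --format short   # repo:tag, one per line
    minikube -p functional-891062 image ls --format table   # aligned table with IDs and sizes
    minikube -p functional-891062 image ls --format json    # one JSON array
    minikube -p functional-891062 image ls --format yaml    # YAML list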

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-891062 ssh pgrep buildkitd: exit status 1 (365.953627ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image build -t localhost/my-image:functional-891062 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-891062 image build -t localhost/my-image:functional-891062 testdata/build --alsologtostderr: (2.143935912s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-891062 image build -t localhost/my-image:functional-891062 testdata/build --alsologtostderr:
I0311 12:57:09.157091  780084 out.go:291] Setting OutFile to fd 1 ...
I0311 12:57:09.157821  780084 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:09.157838  780084 out.go:304] Setting ErrFile to fd 2...
I0311 12:57:09.157845  780084 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 12:57:09.158303  780084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
I0311 12:57:09.159066  780084 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:09.160720  780084 config.go:182] Loaded profile config "functional-891062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 12:57:09.161337  780084 cli_runner.go:164] Run: docker container inspect functional-891062 --format={{.State.Status}}
I0311 12:57:09.186714  780084 ssh_runner.go:195] Run: systemctl --version
I0311 12:57:09.186766  780084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-891062
I0311 12:57:09.213982  780084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33758 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/functional-891062/id_rsa Username:docker}
I0311 12:57:09.313683  780084 build_images.go:151] Building image from path: /tmp/build.3252038381.tar
I0311 12:57:09.313748  780084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0311 12:57:09.325880  780084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3252038381.tar
I0311 12:57:09.330269  780084 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3252038381.tar: stat -c "%s %y" /var/lib/minikube/build/build.3252038381.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3252038381.tar': No such file or directory
I0311 12:57:09.330296  780084 ssh_runner.go:362] scp /tmp/build.3252038381.tar --> /var/lib/minikube/build/build.3252038381.tar (3072 bytes)
I0311 12:57:09.368981  780084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3252038381
I0311 12:57:09.378678  780084 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3252038381 -xf /var/lib/minikube/build/build.3252038381.tar
I0311 12:57:09.388307  780084 containerd.go:379] Building image: /var/lib/minikube/build/build.3252038381
I0311 12:57:09.388384  780084 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3252038381 --local dockerfile=/var/lib/minikube/build/build.3252038381 --output type=image,name=localhost/my-image:functional-891062
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:52d343d2a03d74122d09b99312a7b69e0952d8596f5818fd52a14a9104575708
#8 exporting manifest sha256:52d343d2a03d74122d09b99312a7b69e0952d8596f5818fd52a14a9104575708 0.0s done
#8 exporting config sha256:0150b98a40a940221412a82d115d705a4377eb9addc98bf6e28909c654362e09 0.0s done
#8 naming to localhost/my-image:functional-891062 done
#8 DONE 0.2s
I0311 12:57:11.183990  780084 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3252038381 --local dockerfile=/var/lib/minikube/build/build.3252038381 --output type=image,name=localhost/my-image:functional-891062: (1.795571384s)
I0311 12:57:11.184072  780084 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3252038381
I0311 12:57:11.194822  780084 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3252038381.tar
I0311 12:57:11.204455  780084 build_images.go:207] Built localhost/my-image:functional-891062 from /tmp/build.3252038381.tar
I0311 12:57:11.204493  780084 build_images.go:123] succeeded building to: functional-891062
I0311 12:57:11.204498  780084 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)
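
The BuildKit trace above makes the test's build context recoverable: steps #5-#7 correspond to a three-line Dockerfile. A sketch reconstructed from that trace (the scratch directory is hypothetical; testdata/build in the minikube repo is the actual source):

    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    echo hello > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    minikube -p functional-891062 image build -t localhost/my-image:functional-891062 .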

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/03/11 12:56:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.341567746s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-891062
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.37s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image rm gcr.io/google-containers/addon-resizer:functional-891062 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-891062
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-891062 image save --daemon gcr.io/google-containers/addon-resizer:functional-891062 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-891062
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-891062
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-891062
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-891062
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMutliControlPlane/serial/StartCluster (135.99s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-200723 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0311 12:57:36.425451  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-200723 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m15.077364133s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (135.99s)
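
Note (reference sketch): the HA start is a single command; --ha provisions multiple control-plane nodes, and the apiserver health checks later in this report all go through the shared endpoint https://192.168.49.254:8443. By hand:

	out/minikube-linux-arm64 start -p ha-200723 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr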

TestMutliControlPlane/serial/DeployApp (22.19s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-200723 -- rollout status deployment/busybox: (19.051376401s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-gx2g9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-w7mzc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-wmn7f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-gx2g9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-w7mzc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-wmn7f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-gx2g9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-w7mzc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-wmn7f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (22.19s)
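
Note (reference sketch): the deploy check is apply, wait for rollout, then prove cluster DNS from every replica. Pod names vary per run, so they are listed first; <pod> below is a placeholder:

	out/minikube-linux-arm64 kubectl -p ha-200723 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p ha-200723 -- rollout status deployment/busybox
	out/minikube-linux-arm64 kubectl -p ha-200723 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-arm64 kubectl -p ha-200723 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local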

TestMutliControlPlane/serial/PingHostFromPods (1.78s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0311 12:59:52.581570  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-gx2g9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-gx2g9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-w7mzc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-w7mzc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-wmn7f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-200723 -- exec busybox-5b5d89c9d6-wmn7f -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.78s)
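
Note (reference sketch): each pod resolves host.minikube.internal, extracts the address from the nslookup output, and pings it; 192.168.49.1 is the docker network gateway in this run. Per pod (<pod> is a placeholder):

	out/minikube-linux-arm64 kubectl -p ha-200723 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-arm64 kubectl -p ha-200723 -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"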

TestMutliControlPlane/serial/AddWorkerNode (27.51s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-200723 -v=7 --alsologtostderr
E0311 13:00:20.266317  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-200723 -v=7 --alsologtostderr: (26.501453401s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr: (1.010999944s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (27.51s)
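
Note (reference sketch): adding a worker to the running HA profile is one command, after which "status" is expected to report the new node:

	out/minikube-linux-arm64 node add -p ha-200723 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr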

TestMutliControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-200723 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.12s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.79s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.79s)

TestMutliControlPlane/serial/CopyFile (20.06s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-200723 status --output json -v=7 --alsologtostderr: (1.008113963s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp testdata/cp-test.txt ha-200723:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile772156106/001/cp-test_ha-200723.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723:/home/docker/cp-test.txt ha-200723-m02:/home/docker/cp-test_ha-200723_ha-200723-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test_ha-200723_ha-200723-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723:/home/docker/cp-test.txt ha-200723-m03:/home/docker/cp-test_ha-200723_ha-200723-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m03 "sudo cat /home/docker/cp-test_ha-200723_ha-200723-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723:/home/docker/cp-test.txt ha-200723-m04:/home/docker/cp-test_ha-200723_ha-200723-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m04 "sudo cat /home/docker/cp-test_ha-200723_ha-200723-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp testdata/cp-test.txt ha-200723-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile772156106/001/cp-test_ha-200723-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m02:/home/docker/cp-test.txt ha-200723:/home/docker/cp-test_ha-200723-m02_ha-200723.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723 "sudo cat /home/docker/cp-test_ha-200723-m02_ha-200723.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m02:/home/docker/cp-test.txt ha-200723-m03:/home/docker/cp-test_ha-200723-m02_ha-200723-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m03 "sudo cat /home/docker/cp-test_ha-200723-m02_ha-200723-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m02:/home/docker/cp-test.txt ha-200723-m04:/home/docker/cp-test_ha-200723-m02_ha-200723-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m04 "sudo cat /home/docker/cp-test_ha-200723-m02_ha-200723-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp testdata/cp-test.txt ha-200723-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile772156106/001/cp-test_ha-200723-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m03:/home/docker/cp-test.txt ha-200723:/home/docker/cp-test_ha-200723-m03_ha-200723.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723 "sudo cat /home/docker/cp-test_ha-200723-m03_ha-200723.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m03:/home/docker/cp-test.txt ha-200723-m02:/home/docker/cp-test_ha-200723-m03_ha-200723-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test_ha-200723-m03_ha-200723-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m03:/home/docker/cp-test.txt ha-200723-m04:/home/docker/cp-test_ha-200723-m03_ha-200723-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m04 "sudo cat /home/docker/cp-test_ha-200723-m03_ha-200723-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp testdata/cp-test.txt ha-200723-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile772156106/001/cp-test_ha-200723-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m04:/home/docker/cp-test.txt ha-200723:/home/docker/cp-test_ha-200723-m04_ha-200723.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723 "sudo cat /home/docker/cp-test_ha-200723-m04_ha-200723.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m04:/home/docker/cp-test.txt ha-200723-m02:/home/docker/cp-test_ha-200723-m04_ha-200723-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test_ha-200723-m04_ha-200723-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 cp ha-200723-m04:/home/docker/cp-test.txt ha-200723-m03:/home/docker/cp-test_ha-200723-m04_ha-200723-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m03 "sudo cat /home/docker/cp-test_ha-200723-m04_ha-200723-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (20.06s)
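
Note (reference sketch): every hop above is the same two-step pattern repeated across all node pairs: "cp" pushes a file into a node (or pulls it back out), then "ssh -n <node>" cats it to verify the content survived. One representative pair:

	out/minikube-linux-arm64 -p ha-200723 cp testdata/cp-test.txt ha-200723-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-200723 ssh -n ha-200723-m02 "sudo cat /home/docker/cp-test.txt"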

TestMutliControlPlane/serial/StopSecondaryNode (12.97s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-200723 node stop m02 -v=7 --alsologtostderr: (12.166385414s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr: exit status 7 (799.408439ms)

-- stdout --
	ha-200723
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-200723-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-200723-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-200723-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0311 13:00:54.999163  795366 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:00:54.999289  795366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:00:54.999299  795366 out.go:304] Setting ErrFile to fd 2...
	I0311 13:00:54.999304  795366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:00:54.999547  795366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 13:00:55.000214  795366 out.go:298] Setting JSON to false
	I0311 13:00:55.000260  795366 mustload.go:65] Loading cluster: ha-200723
	I0311 13:00:55.000731  795366 config.go:182] Loaded profile config "ha-200723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 13:00:55.000791  795366 status.go:255] checking status of ha-200723 ...
	I0311 13:00:55.001334  795366 cli_runner.go:164] Run: docker container inspect ha-200723 --format={{.State.Status}}
	I0311 13:00:55.004997  795366 notify.go:220] Checking for updates...
	I0311 13:00:55.042146  795366 status.go:330] ha-200723 host status = "Running" (err=<nil>)
	I0311 13:00:55.042186  795366 host.go:66] Checking if "ha-200723" exists ...
	I0311 13:00:55.042570  795366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-200723
	I0311 13:00:55.063262  795366 host.go:66] Checking if "ha-200723" exists ...
	I0311 13:00:55.063611  795366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:00:55.063672  795366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-200723
	I0311 13:00:55.089608  795366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33763 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/ha-200723/id_rsa Username:docker}
	I0311 13:00:55.194908  795366 ssh_runner.go:195] Run: systemctl --version
	I0311 13:00:55.199945  795366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:00:55.218440  795366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:00:55.298215  795366 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-11 13:00:55.285108523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:00:55.300196  795366 kubeconfig.go:125] found "ha-200723" server: "https://192.168.49.254:8443"
	I0311 13:00:55.300233  795366 api_server.go:166] Checking apiserver status ...
	I0311 13:00:55.300288  795366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:00:55.312708  795366 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	I0311 13:00:55.322369  795366 api_server.go:182] apiserver freezer: "4:freezer:/docker/dc4c908ceaf050dc121e0c7a6e46e929d65bc389c1c4009a1a34459af07d0fa0/kubepods/burstable/podf514dc33403ca0f337c3f50c4d6c9678/bb2061e058400ca7613aa6dc09a1d1397b4bc59c303cf4dad893cd8479dc80ce"
	I0311 13:00:55.322459  795366 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dc4c908ceaf050dc121e0c7a6e46e929d65bc389c1c4009a1a34459af07d0fa0/kubepods/burstable/podf514dc33403ca0f337c3f50c4d6c9678/bb2061e058400ca7613aa6dc09a1d1397b4bc59c303cf4dad893cd8479dc80ce/freezer.state
	I0311 13:00:55.331520  795366 api_server.go:204] freezer state: "THAWED"
	I0311 13:00:55.331547  795366 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0311 13:00:55.340273  795366 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0311 13:00:55.340301  795366 status.go:422] ha-200723 apiserver status = Running (err=<nil>)
	I0311 13:00:55.340313  795366 status.go:257] ha-200723 status: &{Name:ha-200723 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:00:55.340331  795366 status.go:255] checking status of ha-200723-m02 ...
	I0311 13:00:55.340661  795366 cli_runner.go:164] Run: docker container inspect ha-200723-m02 --format={{.State.Status}}
	I0311 13:00:55.358407  795366 status.go:330] ha-200723-m02 host status = "Stopped" (err=<nil>)
	I0311 13:00:55.358438  795366 status.go:343] host is not running, skipping remaining checks
	I0311 13:00:55.358447  795366 status.go:257] ha-200723-m02 status: &{Name:ha-200723-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:00:55.358469  795366 status.go:255] checking status of ha-200723-m03 ...
	I0311 13:00:55.358793  795366 cli_runner.go:164] Run: docker container inspect ha-200723-m03 --format={{.State.Status}}
	I0311 13:00:55.377826  795366 status.go:330] ha-200723-m03 host status = "Running" (err=<nil>)
	I0311 13:00:55.377852  795366 host.go:66] Checking if "ha-200723-m03" exists ...
	I0311 13:00:55.378170  795366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-200723-m03
	I0311 13:00:55.395461  795366 host.go:66] Checking if "ha-200723-m03" exists ...
	I0311 13:00:55.395785  795366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:00:55.395831  795366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-200723-m03
	I0311 13:00:55.420529  795366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/ha-200723-m03/id_rsa Username:docker}
	I0311 13:00:55.513964  795366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:00:55.525838  795366 kubeconfig.go:125] found "ha-200723" server: "https://192.168.49.254:8443"
	I0311 13:00:55.525866  795366 api_server.go:166] Checking apiserver status ...
	I0311 13:00:55.526012  795366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:00:55.538945  795366 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1324/cgroup
	I0311 13:00:55.548735  795366 api_server.go:182] apiserver freezer: "4:freezer:/docker/7627ee999d92a0584fe6cf5d78d8e4bdeca00adc37a690b2686ecd76888fd45a/kubepods/burstable/podd91c8e255473cd7820a7dae2659dc735/daef4f377a4eb9bc56fb6ec07f8dc3d9d99c972959bdbc29f87b9c5657896d96"
	I0311 13:00:55.548830  795366 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7627ee999d92a0584fe6cf5d78d8e4bdeca00adc37a690b2686ecd76888fd45a/kubepods/burstable/podd91c8e255473cd7820a7dae2659dc735/daef4f377a4eb9bc56fb6ec07f8dc3d9d99c972959bdbc29f87b9c5657896d96/freezer.state
	I0311 13:00:55.557747  795366 api_server.go:204] freezer state: "THAWED"
	I0311 13:00:55.557779  795366 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0311 13:00:55.566411  795366 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0311 13:00:55.566441  795366 status.go:422] ha-200723-m03 apiserver status = Running (err=<nil>)
	I0311 13:00:55.566451  795366 status.go:257] ha-200723-m03 status: &{Name:ha-200723-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:00:55.566469  795366 status.go:255] checking status of ha-200723-m04 ...
	I0311 13:00:55.566794  795366 cli_runner.go:164] Run: docker container inspect ha-200723-m04 --format={{.State.Status}}
	I0311 13:00:55.582984  795366 status.go:330] ha-200723-m04 host status = "Running" (err=<nil>)
	I0311 13:00:55.583012  795366 host.go:66] Checking if "ha-200723-m04" exists ...
	I0311 13:00:55.583407  795366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-200723-m04
	I0311 13:00:55.599323  795366 host.go:66] Checking if "ha-200723-m04" exists ...
	I0311 13:00:55.599687  795366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:00:55.599736  795366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-200723-m04
	I0311 13:00:55.618136  795366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33778 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/ha-200723-m04/id_rsa Username:docker}
	I0311 13:00:55.709892  795366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:00:55.721773  795366 status.go:257] ha-200723-m04 status: &{Name:ha-200723-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.97s)
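
Note (reference sketch): the "Non-zero exit ... exit status 7" above is the expected outcome, not a failure: with m02 stopped, "status" reports the degraded cluster and exits non-zero, which is what the test asserts. By hand:

	out/minikube-linux-arm64 -p ha-200723 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
	# expect exit status 7, with m02 shown as Stopped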

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMutliControlPlane/serial/RestartSecondaryNode (18.89s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 node start m02 -v=7 --alsologtostderr
E0311 13:01:09.620316  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:09.625573  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:09.635836  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:09.656091  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:09.696481  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:09.776734  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:09.936977  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:10.257980  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:10.899078  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:12.179692  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-200723 node start m02 -v=7 --alsologtostderr: (17.752707146s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
E0311 13:01:14.740558  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr: (1.029493573s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (18.89s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (128.4s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-200723 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-200723 -v=7 --alsologtostderr
E0311 13:01:19.861149  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:30.102094  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:01:50.583281  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-200723 -v=7 --alsologtostderr: (37.076050257s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-200723 --wait=true -v=7 --alsologtostderr
E0311 13:02:31.543699  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-200723 --wait=true -v=7 --alsologtostderr: (1m31.143571261s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-200723
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (128.40s)
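
Note (reference sketch): the check is stop everything, start again with --wait=true, and compare "node list" before and after; the pass condition is that the same nodes come back:

	out/minikube-linux-arm64 node list -p ha-200723 -v=7 --alsologtostderr
	out/minikube-linux-arm64 stop -p ha-200723 -v=7 --alsologtostderr
	out/minikube-linux-arm64 start -p ha-200723 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-arm64 node list -p ha-200723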

TestMutliControlPlane/serial/DeleteSecondaryNode (11.7s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-200723 node delete m03 -v=7 --alsologtostderr: (10.664807674s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (11.70s)
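
Note (reference sketch): removing a control-plane node and confirming the survivors are still Ready:

	out/minikube-linux-arm64 -p ha-200723 node delete m03 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
	kubectl get nodes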

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

TestMutliControlPlane/serial/StopCluster (36.18s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 stop -v=7 --alsologtostderr
E0311 13:03:53.463860  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-200723 stop -v=7 --alsologtostderr: (36.062412008s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr: exit status 7 (115.45338ms)

-- stdout --
	ha-200723
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-200723-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-200723-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0311 13:04:12.695192  808820 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:04:12.695343  808820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:04:12.695355  808820 out.go:304] Setting ErrFile to fd 2...
	I0311 13:04:12.695360  808820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:04:12.695604  808820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 13:04:12.695799  808820 out.go:298] Setting JSON to false
	I0311 13:04:12.695834  808820 mustload.go:65] Loading cluster: ha-200723
	I0311 13:04:12.695963  808820 notify.go:220] Checking for updates...
	I0311 13:04:12.696242  808820 config.go:182] Loaded profile config "ha-200723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 13:04:12.696253  808820 status.go:255] checking status of ha-200723 ...
	I0311 13:04:12.696740  808820 cli_runner.go:164] Run: docker container inspect ha-200723 --format={{.State.Status}}
	I0311 13:04:12.715317  808820 status.go:330] ha-200723 host status = "Stopped" (err=<nil>)
	I0311 13:04:12.715343  808820 status.go:343] host is not running, skipping remaining checks
	I0311 13:04:12.715365  808820 status.go:257] ha-200723 status: &{Name:ha-200723 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:04:12.715392  808820 status.go:255] checking status of ha-200723-m02 ...
	I0311 13:04:12.715695  808820 cli_runner.go:164] Run: docker container inspect ha-200723-m02 --format={{.State.Status}}
	I0311 13:04:12.732212  808820 status.go:330] ha-200723-m02 host status = "Stopped" (err=<nil>)
	I0311 13:04:12.732235  808820 status.go:343] host is not running, skipping remaining checks
	I0311 13:04:12.732243  808820 status.go:257] ha-200723-m02 status: &{Name:ha-200723-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:04:12.732263  808820 status.go:255] checking status of ha-200723-m04 ...
	I0311 13:04:12.732582  808820 cli_runner.go:164] Run: docker container inspect ha-200723-m04 --format={{.State.Status}}
	I0311 13:04:12.754662  808820 status.go:330] ha-200723-m04 host status = "Stopped" (err=<nil>)
	I0311 13:04:12.754688  808820 status.go:343] host is not running, skipping remaining checks
	I0311 13:04:12.754696  808820 status.go:257] ha-200723-m04 status: &{Name:ha-200723-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (36.18s)

TestMutliControlPlane/serial/RestartCluster (60.41s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-200723 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0311 13:04:52.582426  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-200723 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (59.479169079s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (60.41s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMutliControlPlane/serial/AddSecondaryNode (48.86s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-200723 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-200723 --control-plane -v=7 --alsologtostderr: (47.820889001s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr: (1.035702679s)
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (48.86s)
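
Note (reference sketch): re-adding a control-plane node is the same "node add" as the worker case, plus --control-plane:

	out/minikube-linux-arm64 node add -p ha-200723 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-200723 status -v=7 --alsologtostderr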

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

TestJSONOutput/start/Command (57.14s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-101062 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0311 13:06:37.303998  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-101062 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (57.133033808s)
--- PASS: TestJSONOutput/start/Command (57.14s)
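
Note (reference sketch): with --output=json every progress step is printed as one CloudEvents-style JSON object per line (the same shape as the TestErrorJSONOutput stdout further below), so the stream is machine-readable; assuming jq is available, the event types could be extracted with:

	out/minikube-linux-arm64 start -p json-output-101062 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd | jq -r .type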

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-101062 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-101062 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-101062 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-101062 --output=json --user=testUser: (5.788275648s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-885377 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-885377 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.444873ms)

-- stdout --
	{"specversion":"1.0","id":"65278e17-994c-46d6-b0d5-3d3504397a82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-885377] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9366854b-62ae-4bb0-bc34-8b1508eeb4cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18350"}}
	{"specversion":"1.0","id":"329927af-db8d-468c-8e90-3a2987f84241","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1d853883-398b-4909-9356-8e5fbc5eeb05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig"}}
	{"specversion":"1.0","id":"198977e6-93eb-4047-ae8e-8796156e57a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube"}}
	{"specversion":"1.0","id":"9b34e177-681a-47d8-88e2-d8ae1829bb11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8134d212-c283-4635-bef5-7696222f2b48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a3776490-51ac-47b7-a42d-0f3109dba392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-885377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-885377
--- PASS: TestErrorJSONOutput (0.26s)

TestKicCustomNetwork/create_custom_network (43.67s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-196182 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-196182 --network=: (41.611422773s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-196182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-196182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-196182: (2.039576879s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.67s)
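
Note (reference sketch): passing an empty "--network=" asks minikube to create a docker network of its own for the node container; the test then lists docker networks to confirm the result before deleting the profile:

	out/minikube-linux-arm64 start -p docker-network-196182 --network=
	docker network ls --format {{.Name}}
	out/minikube-linux-arm64 delete -p docker-network-196182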

TestKicCustomNetwork/use_default_bridge_network (35.6s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-175336 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-175336 --network=bridge: (33.948329299s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-175336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-175336
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-175336: (1.628827557s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.60s)

TestKicExistingNetwork (37.5s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-698605 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-698605 --network=existing-network: (35.326115224s)
helpers_test.go:175: Cleaning up "existing-network-698605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-698605
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-698605: (2.029146856s)
--- PASS: TestKicExistingNetwork (37.50s)

TestKicCustomSubnet (36.23s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-236293 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-236293 --subnet=192.168.60.0/24: (34.182137319s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-236293 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-236293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-236293
E0311 13:09:52.581957  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-236293: (2.024388096s)
--- PASS: TestKicCustomSubnet (36.23s)
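
The assertion at kic_custom_network_test.go:161 reads the subnet back through a Go template: "{{(index .IPAM.Config 0).Subnet}}" selects the Subnet field of the first IPAM config entry. A sketch of the same check, shelling out the way the test harness does (the helper below is illustrative, not the test's actual code):

	// Verify a docker network's subnet with the same template the test uses.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "192.168.60.0/24" // the subnet requested via --subnet above
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-236293",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		if got := strings.TrimSpace(string(out)); got != want {
			fmt.Printf("subnet mismatch: got %q, want %q\n", got, want)
			return
		}
		fmt.Println("subnet matches:", want)
	}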

TestKicStaticIP (33.89s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-353704 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-353704 --static-ip=192.168.200.200: (31.576950571s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-353704 ip
helpers_test.go:175: Cleaning up "static-ip-353704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-353704
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-353704: (2.162862795s)
--- PASS: TestKicStaticIP (33.89s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (67.32s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-785812 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-785812 --driver=docker  --container-runtime=containerd: (31.009495979s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-788847 --driver=docker  --container-runtime=containerd
E0311 13:11:09.620866  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:11:15.626549  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-788847 --driver=docker  --container-runtime=containerd: (30.854897983s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-785812
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-788847
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-788847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-788847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-788847: (1.941958358s)
helpers_test.go:175: Cleaning up "first-785812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-785812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-785812: (2.210272107s)
--- PASS: TestMinikubeProfile (67.32s)

TestMountStart/serial/StartWithMountFirst (6.32s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-188982 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-188982 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.319085134s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.32s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-188982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.45s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-202364 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-202364 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.450763847s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.45s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-202364 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-188982 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-188982 --alsologtostderr -v=5: (1.680667229s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-202364 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-202364
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-202364: (1.205167489s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.6s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-202364
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-202364: (6.599959106s)
--- PASS: TestMountStart/serial/RestartStopped (7.60s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-202364 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (80.46s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-707324 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-707324 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.925240896s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (80.46s)

TestMultiNode/serial/DeployApp2Nodes (11.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-707324 -- rollout status deployment/busybox: (4.232808841s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-k2wb9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-k2wb9 -- nslookup kubernetes.io: (5.256845968s)
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-prx6b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-k2wb9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-prx6b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-k2wb9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-prx6b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (11.28s)
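
The pod-IP check above relies on kubectl's JSONPath output: '{.items[*].status.podIP}' prints the IPs of all matching pods space-separated, so the test can split them and confirm the two busybox replicas landed on different nodes. A sketch of that step, using plain kubectl with --context rather than the minikube kubectl wrapper the test shells out to (illustrative only):

	// Collect pod IPs via JSONPath and expect one busybox pod per node.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "multinode-707324",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			panic(err)
		}
		ips := strings.Fields(string(out)) // JSONPath joins values with spaces
		fmt.Printf("busybox pod IPs: %v\n", ips)
		if len(ips) != 2 {
			fmt.Println("expected exactly 2 pod IPs, one per node")
		}
	}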

TestMultiNode/serial/PingHostFrom2Pods (1.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-k2wb9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-k2wb9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-prx6b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-707324 -- exec busybox-5b5d89c9d6-prx6b -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.10s)

TestMultiNode/serial/AddNode (17s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-707324 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-707324 -v 3 --alsologtostderr: (16.282128105s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.00s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-707324 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.49s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp testdata/cp-test.txt multinode-707324:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile512228767/001/cp-test_multinode-707324.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324:/home/docker/cp-test.txt multinode-707324-m02:/home/docker/cp-test_multinode-707324_multinode-707324-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m02 "sudo cat /home/docker/cp-test_multinode-707324_multinode-707324-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324:/home/docker/cp-test.txt multinode-707324-m03:/home/docker/cp-test_multinode-707324_multinode-707324-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m03 "sudo cat /home/docker/cp-test_multinode-707324_multinode-707324-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp testdata/cp-test.txt multinode-707324-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile512228767/001/cp-test_multinode-707324-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324-m02:/home/docker/cp-test.txt multinode-707324:/home/docker/cp-test_multinode-707324-m02_multinode-707324.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324 "sudo cat /home/docker/cp-test_multinode-707324-m02_multinode-707324.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324-m02:/home/docker/cp-test.txt multinode-707324-m03:/home/docker/cp-test_multinode-707324-m02_multinode-707324-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m03 "sudo cat /home/docker/cp-test_multinode-707324-m02_multinode-707324-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp testdata/cp-test.txt multinode-707324-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile512228767/001/cp-test_multinode-707324-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324-m03:/home/docker/cp-test.txt multinode-707324:/home/docker/cp-test_multinode-707324-m03_multinode-707324.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324 "sudo cat /home/docker/cp-test_multinode-707324-m03_multinode-707324.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 cp multinode-707324-m03:/home/docker/cp-test.txt multinode-707324-m02:/home/docker/cp-test_multinode-707324-m03_multinode-707324-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 ssh -n multinode-707324-m02 "sudo cat /home/docker/cp-test_multinode-707324-m03_multinode-707324-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.49s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-707324 node stop m03: (1.243992561s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-707324 status: exit status 7 (506.506222ms)

-- stdout --
	multinode-707324
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-707324-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-707324-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-707324 status --alsologtostderr: exit status 7 (517.079373ms)

-- stdout --
	multinode-707324
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-707324-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-707324-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0311 13:14:03.440837  860483 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:14:03.441011  860483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:14:03.441024  860483 out.go:304] Setting ErrFile to fd 2...
	I0311 13:14:03.441031  860483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:14:03.441294  860483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 13:14:03.441503  860483 out.go:298] Setting JSON to false
	I0311 13:14:03.441550  860483 mustload.go:65] Loading cluster: multinode-707324
	I0311 13:14:03.441594  860483 notify.go:220] Checking for updates...
	I0311 13:14:03.441990  860483 config.go:182] Loaded profile config "multinode-707324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 13:14:03.442010  860483 status.go:255] checking status of multinode-707324 ...
	I0311 13:14:03.442527  860483 cli_runner.go:164] Run: docker container inspect multinode-707324 --format={{.State.Status}}
	I0311 13:14:03.461255  860483 status.go:330] multinode-707324 host status = "Running" (err=<nil>)
	I0311 13:14:03.461308  860483 host.go:66] Checking if "multinode-707324" exists ...
	I0311 13:14:03.461605  860483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-707324
	I0311 13:14:03.483369  860483 host.go:66] Checking if "multinode-707324" exists ...
	I0311 13:14:03.483758  860483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:14:03.483828  860483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-707324
	I0311 13:14:03.508900  860483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/multinode-707324/id_rsa Username:docker}
	I0311 13:14:03.606007  860483 ssh_runner.go:195] Run: systemctl --version
	I0311 13:14:03.610112  860483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:14:03.621599  860483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:14:03.679897  860483 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-11 13:14:03.67050437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:14:03.680498  860483 kubeconfig.go:125] found "multinode-707324" server: "https://192.168.67.2:8443"
	I0311 13:14:03.680530  860483 api_server.go:166] Checking apiserver status ...
	I0311 13:14:03.680582  860483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:14:03.691716  860483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1470/cgroup
	I0311 13:14:03.701103  860483 api_server.go:182] apiserver freezer: "4:freezer:/docker/c661987f8416720ae58ce68319ac1e8d09b7485482c97c7966f790689aef75b1/kubepods/burstable/pod4f8940708298368d003e6bfbe28cd65d/526b0591e70023274c3e3fc2dde92763d35c0636dc31becf9de01189e3702d55"
	I0311 13:14:03.701187  860483 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c661987f8416720ae58ce68319ac1e8d09b7485482c97c7966f790689aef75b1/kubepods/burstable/pod4f8940708298368d003e6bfbe28cd65d/526b0591e70023274c3e3fc2dde92763d35c0636dc31becf9de01189e3702d55/freezer.state
	I0311 13:14:03.710007  860483 api_server.go:204] freezer state: "THAWED"
	I0311 13:14:03.710040  860483 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0311 13:14:03.718497  860483 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0311 13:14:03.718524  860483 status.go:422] multinode-707324 apiserver status = Running (err=<nil>)
	I0311 13:14:03.718535  860483 status.go:257] multinode-707324 status: &{Name:multinode-707324 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:14:03.718554  860483 status.go:255] checking status of multinode-707324-m02 ...
	I0311 13:14:03.718881  860483 cli_runner.go:164] Run: docker container inspect multinode-707324-m02 --format={{.State.Status}}
	I0311 13:14:03.738871  860483 status.go:330] multinode-707324-m02 host status = "Running" (err=<nil>)
	I0311 13:14:03.738919  860483 host.go:66] Checking if "multinode-707324-m02" exists ...
	I0311 13:14:03.739216  860483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-707324-m02
	I0311 13:14:03.755051  860483 host.go:66] Checking if "multinode-707324-m02" exists ...
	I0311 13:14:03.755419  860483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:14:03.755471  860483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-707324-m02
	I0311 13:14:03.775379  860483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/18350-741028/.minikube/machines/multinode-707324-m02/id_rsa Username:docker}
	I0311 13:14:03.865730  860483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:14:03.877243  860483 status.go:257] multinode-707324-m02 status: &{Name:multinode-707324-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:14:03.877279  860483 status.go:255] checking status of multinode-707324-m03 ...
	I0311 13:14:03.877598  860483 cli_runner.go:164] Run: docker container inspect multinode-707324-m03 --format={{.State.Status}}
	I0311 13:14:03.894737  860483 status.go:330] multinode-707324-m03 host status = "Stopped" (err=<nil>)
	I0311 13:14:03.894761  860483 status.go:343] host is not running, skipping remaining checks
	I0311 13:14:03.894768  860483 status.go:257] multinode-707324-m03 status: &{Name:multinode-707324-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
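
The stderr trace above shows how "minikube status" grades a control-plane node: inspect the container's state with docker, ssh in to check the kubelet service, locate the kube-apiserver process and read its cgroup freezer state, then probe https://<node-ip>:8443/healthz. A sketch of just that final probe (minikube authenticates with the cluster's client certificates; skipping TLS verification here only keeps the illustration short):

	// Probe the apiserver healthz endpoint, as in the status trace above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
	}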

TestMultiNode/serial/StartAfterStop (9.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-707324 node start m03 -v=7 --alsologtostderr: (8.405907817s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.17s)

TestMultiNode/serial/RestartKeepsNodes (129.35s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-707324
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-707324
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-707324: (25.029901904s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-707324 --wait=true -v=8 --alsologtostderr
E0311 13:14:52.582061  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 13:16:09.620817  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-707324 --wait=true -v=8 --alsologtostderr: (1m44.120165999s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-707324
--- PASS: TestMultiNode/serial/RestartKeepsNodes (129.35s)

TestMultiNode/serial/DeleteNode (5.72s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-707324 node delete m03: (5.027109916s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-707324 stop: (23.822209635s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-707324 status: exit status 7 (101.54363ms)

-- stdout --
	multinode-707324
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-707324-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-707324 status --alsologtostderr: exit status 7 (91.139849ms)

-- stdout --
	multinode-707324
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-707324-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0311 13:16:52.126671  868706 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:16:52.126840  868706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:16:52.126851  868706 out.go:304] Setting ErrFile to fd 2...
	I0311 13:16:52.126857  868706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:16:52.127092  868706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 13:16:52.127281  868706 out.go:298] Setting JSON to false
	I0311 13:16:52.127317  868706 mustload.go:65] Loading cluster: multinode-707324
	I0311 13:16:52.127411  868706 notify.go:220] Checking for updates...
	I0311 13:16:52.127765  868706 config.go:182] Loaded profile config "multinode-707324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 13:16:52.127777  868706 status.go:255] checking status of multinode-707324 ...
	I0311 13:16:52.128316  868706 cli_runner.go:164] Run: docker container inspect multinode-707324 --format={{.State.Status}}
	I0311 13:16:52.145435  868706 status.go:330] multinode-707324 host status = "Stopped" (err=<nil>)
	I0311 13:16:52.145461  868706 status.go:343] host is not running, skipping remaining checks
	I0311 13:16:52.145469  868706 status.go:257] multinode-707324 status: &{Name:multinode-707324 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:16:52.145512  868706 status.go:255] checking status of multinode-707324-m02 ...
	I0311 13:16:52.145823  868706 cli_runner.go:164] Run: docker container inspect multinode-707324-m02 --format={{.State.Status}}
	I0311 13:16:52.161999  868706 status.go:330] multinode-707324-m02 host status = "Stopped" (err=<nil>)
	I0311 13:16:52.162023  868706 status.go:343] host is not running, skipping remaining checks
	I0311 13:16:52.162031  868706 status.go:257] multinode-707324-m02 status: &{Name:multinode-707324-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (50.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-707324 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0311 13:17:32.664901  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-707324 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.419719253s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-707324 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.12s)

TestMultiNode/serial/ValidateNameConflict (34.7s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-707324
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-707324-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-707324-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.850661ms)

-- stdout --
	* [multinode-707324-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-707324-m02' is duplicated with machine name 'multinode-707324-m02' in profile 'multinode-707324'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-707324-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-707324-m03 --driver=docker  --container-runtime=containerd: (32.229396525s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-707324
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-707324: exit status 80 (316.028703ms)

-- stdout --
	* Adding node m03 to cluster multinode-707324 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-707324-m03 already exists in multinode-707324-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-707324-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-707324-m03: (1.997722188s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.70s)

TestPreload (105.22s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-186684 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-186684 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m8.63312874s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-186684 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-186684 image pull gcr.io/k8s-minikube/busybox: (1.319648768s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-186684
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-186684: (12.097725089s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-186684 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0311 13:19:52.582183  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-186684 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.407396659s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-186684 image list
helpers_test.go:175: Cleaning up "test-preload-186684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-186684
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-186684: (2.38933838s)
--- PASS: TestPreload (105.22s)

TestScheduledStopUnix (105.3s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-048283 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-048283 --memory=2048 --driver=docker  --container-runtime=containerd: (29.535332547s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-048283 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-048283 -n scheduled-stop-048283
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-048283 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-048283 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-048283 -n scheduled-stop-048283
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-048283
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-048283 --schedule 15s
E0311 13:21:09.621875  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-048283
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-048283: exit status 7 (73.928083ms)

-- stdout --
	scheduled-stop-048283
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-048283 -n scheduled-stop-048283
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-048283 -n scheduled-stop-048283: exit status 7 (75.830619ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-048283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-048283
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-048283: (4.205428533s)
--- PASS: TestScheduledStopUnix (105.30s)
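
The test above exercises three behaviors of scheduled stop: a pending stop can be scheduled (--schedule 5m), replaced by a newer one (--schedule 15s; the "os: process already finished" lines suggest the earlier scheduler process is killed and replaced), or cleared (--cancel-scheduled). The in-process timer below only sketches those replace/cancel semantics; it is not minikube's implementation, which uses a separate scheduled process:

	// Schedule/replace/cancel semantics of "minikube stop --schedule", sketched.
	package main

	import (
		"fmt"
		"time"
	)

	type scheduledStop struct{ timer *time.Timer }

	func (s *scheduledStop) schedule(d time.Duration, stop func()) {
		if s.timer != nil {
			s.timer.Stop() // a newer schedule replaces any pending one
		}
		s.timer = time.AfterFunc(d, stop)
	}

	func (s *scheduledStop) cancel() { // --cancel-scheduled
		if s.timer != nil {
			s.timer.Stop()
		}
	}

	func main() {
		var s scheduledStop
		stop := func() { fmt.Println("stopping cluster") }
		s.schedule(5*time.Minute, stop)
		s.schedule(100*time.Millisecond, stop) // replaces the 5m schedule
		time.Sleep(200 * time.Millisecond)     // the 100ms stop fires here
		s.cancel()                             // no-op: nothing is pending anymore
	}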

TestInsufficientStorage (12.92s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-187222 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-187222 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.420407571s)

-- stdout --
	{"specversion":"1.0","id":"0d0f635b-f97b-4cde-926e-cdb868e411e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-187222] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fdcc3093-7ae1-4f82-b636-bfec56c49008","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18350"}}
	{"specversion":"1.0","id":"b49d07c6-b04d-4664-86f2-fd2324d69655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9d333a31-c28b-4512-a08e-b316fb5c0047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig"}}
	{"specversion":"1.0","id":"f6097b97-15f5-468e-b277-91831b1c16a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube"}}
	{"specversion":"1.0","id":"404f25a5-6666-4355-be7d-eb3b723dc320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"19b0e1b1-ee3f-44d2-a1ab-53070dfe1235","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2e187387-7982-4009-9c28-be4370baac24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"da24e3e5-ff1b-4b65-aa49-c4b3341937e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f9fed22b-8b36-4cce-b20f-ea42c59848c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"462ca71e-0165-40ff-a54c-684434d33442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"72678491-d285-4766-9e74-9e3bc293eecb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-187222\" primary control-plane node in \"insufficient-storage-187222\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcde48a4-e860-491d-9919-4aebdb7c1c33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7759bc7-3b24-4f0c-b1bd-a1b922add7db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0ed2fec-85f4-4c20-8bde-7c1e330f9159","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-187222 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-187222 --output=json --layout=cluster: exit status 7 (314.597441ms)

-- stdout --
	{"Name":"insufficient-storage-187222","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-187222","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0311 13:22:02.238140  886303 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-187222" does not appear in /home/jenkins/minikube-integration/18350-741028/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-187222 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-187222 --output=json --layout=cluster: exit status 7 (296.865792ms)

-- stdout --
	{"Name":"insufficient-storage-187222","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-187222","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 13:22:02.541093  886355 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-187222" does not appear in /home/jenkins/minikube-integration/18350-741028/kubeconfig
	E0311 13:22:02.552191  886355 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/insufficient-storage-187222/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-187222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-187222
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-187222: (1.883474434s)
--- PASS: TestInsufficientStorage (12.92s)
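
Note: the `--layout=cluster` payload shown above decodes into a small set of structs. A sketch modeling only the fields that appear in this report (the real schema may have more), fed with a trimmed copy of the JSON above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"` // 507 = InsufficientStorage above
	StatusName    string               `json:"StatusName"`
	StatusDetail  string               `json:"StatusDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	raw := `{"Name":"insufficient-storage-187222","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space"}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Name, st.StatusCode, st.StatusName)
}
```
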

                                                
                                    
TestRunningBinaryUpgrade (93.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.57387022 start -p running-upgrade-586391 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.57387022 start -p running-upgrade-586391 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.2293991s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-586391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-586391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.25799062s)
helpers_test.go:175: Cleaning up "running-upgrade-586391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-586391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-586391: (3.627037051s)
--- PASS: TestRunningBinaryUpgrade (93.20s)

                                                
                                    
TestKubernetesUpgrade (394.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-053618 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-053618 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.501811968s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-053618
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-053618: (1.304475956s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-053618 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-053618 status --format={{.Host}}: exit status 7 (101.820378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-053618 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0311 13:24:52.598683  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-053618 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m4.346557865s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-053618 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-053618 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-053618 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (133.522907ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-053618] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-053618
	    minikube start -p kubernetes-upgrade-053618 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0536182 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-053618 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-053618 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-053618 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.24413479s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-053618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-053618
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-053618: (3.898081908s)
--- PASS: TestKubernetesUpgrade (394.74s)
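
Note: the downgrade guard exercised above is easy to reproduce outside the suite. A sketch, assuming a `minikube` binary on PATH and reusing the profile name from this run:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Moving an existing v1.29.0-rc.2 profile back to v1.20.0 should fail
	// fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), as in the log.
	cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-053618",
		"--memory=2200", "--kubernetes-version=v1.20.0",
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade refused as expected (exit status 106)")
		return
	}
	fmt.Println("unexpected result:", err)
}
```
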

                                                
                                    
TestMissingContainerUpgrade (162.31s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4208125706 start -p missing-upgrade-772176 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4208125706 start -p missing-upgrade-772176 --memory=2200 --driver=docker  --container-runtime=containerd: (1m31.410572379s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-772176
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-772176
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-772176 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-772176 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.856786203s)
helpers_test.go:175: Cleaning up "missing-upgrade-772176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-772176
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-772176: (2.058884425s)
--- PASS: TestMissingContainerUpgrade (162.31s)

                                                
                                    
TestPause/serial/Start (68.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-502778 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-502778 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m8.669276807s)
--- PASS: TestPause/serial/Start (68.67s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-102782 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-102782 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (112.543175ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-102782] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
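
Note: exit status 14 is the usage-error path here; the two flags are mutually exclusive by construction. A sketch of that kind of up-front validation (a hypothetical validator, not minikube's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// validateFlags is hypothetical: --kubernetes-version has no meaning once
// --no-kubernetes is set, so the combination is rejected before any work.
func validateFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateFlags(true, "1.20"))
}
```
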

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-102782 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-102782 --driver=docker  --container-runtime=containerd: (41.639843322s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-102782 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.13s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-102782 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-102782 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.537184929s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-102782 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-102782 status -o json: exit status 2 (374.496225ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-102782","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-102782
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-102782: (1.949151279s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.86s)

                                                
                                    
TestNoKubernetes/serial/Start (5.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-102782 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-102782 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.615099718s)
--- PASS: TestNoKubernetes/serial/Start (5.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-102782 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-102782 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.206803ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
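
Note: the assertion above leans on systemd semantics: `systemctl is-active` exits 0 only when the unit is active, and the `ssh: Process exited with status 3` seen here is systemd's conventional "not running" code. A local sketch of the same check, assuming a systemd host rather than going through `minikube ssh`:

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is running; any non-zero
// exit (inactive, failed, missing unit) surfaces as a non-nil error.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
```
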

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-102782
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-102782: (1.268896817s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-102782 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-102782 --driver=docker  --container-runtime=containerd: (6.733833858s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.73s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-502778 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-502778 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.440680728s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-102782 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-102782 "sudo systemctl is-active --quiet service kubelet": exit status 1 (261.805606ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestPause/serial/Pause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-502778 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-502778 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-502778 --output=json --layout=cluster: exit status 2 (329.842331ms)

                                                
                                                
-- stdout --
	{"Name":"pause-502778","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-502778","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
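
Note: the StatusCode values in these payloads reuse HTTP-style numbers (418, HTTP's "I'm a teapot", marks a paused cluster). A sketch of the mapping, limited to the codes that actually appear in this report:

```go
package main

import "fmt"

// statusNames covers only the StatusCode/StatusName pairs observed above;
// minikube may define more.
var statusNames = map[int]string{
	200: "OK",
	405: "Stopped",
	418: "Paused",
	500: "Error",
	507: "InsufficientStorage",
}

func main() {
	fmt.Println(statusNames[418]) // Paused
}
```
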

                                                
                                    
TestPause/serial/Unpause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-502778 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

                                                
                                    
TestPause/serial/PauseAgain (1.12s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-502778 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-502778 --alsologtostderr -v=5: (1.117842631s)
--- PASS: TestPause/serial/PauseAgain (1.12s)

                                                
                                    
TestPause/serial/DeletePaused (3.18s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-502778 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-502778 --alsologtostderr -v=5: (3.179709604s)
--- PASS: TestPause/serial/DeletePaused (3.18s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.18s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-502778
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-502778: exit status 1 (18.75904ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-502778: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.18s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.15s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (130.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1097940363 start -p stopped-upgrade-542170 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0311 13:26:09.620875  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1097940363 start -p stopped-upgrade-542170 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.25127091s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1097940363 -p stopped-upgrade-542170 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1097940363 -p stopped-upgrade-542170 stop: (19.884163639s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-542170 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0311 13:27:55.626775  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-542170 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.739973325s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (130.88s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-542170
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-542170: (1.450518s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

                                                
                                    
TestNetworkPlugins/group/false (5.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-198981 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-198981 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (261.372073ms)

                                                
                                                
-- stdout --
	* [false-198981] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 13:30:05.575815  926550 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:30:05.576146  926550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:30:05.576177  926550 out.go:304] Setting ErrFile to fd 2...
	I0311 13:30:05.576202  926550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:30:05.576469  926550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-741028/.minikube/bin
	I0311 13:30:05.576965  926550 out.go:298] Setting JSON to false
	I0311 13:30:05.577916  926550 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18750,"bootTime":1710145056,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0311 13:30:05.578017  926550 start.go:139] virtualization:  
	I0311 13:30:05.580975  926550 out.go:177] * [false-198981] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 13:30:05.583088  926550 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 13:30:05.585331  926550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:30:05.583178  926550 notify.go:220] Checking for updates...
	I0311 13:30:05.590153  926550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-741028/kubeconfig
	I0311 13:30:05.592252  926550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-741028/.minikube
	I0311 13:30:05.593920  926550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 13:30:05.595722  926550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:30:05.598177  926550 config.go:182] Loaded profile config "force-systemd-flag-832517": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 13:30:05.598287  926550 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:30:05.621645  926550 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 13:30:05.621764  926550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:30:05.741637  926550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 13:30:05.727003465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:30:05.741739  926550 docker.go:295] overlay module found
	I0311 13:30:05.744460  926550 out.go:177] * Using the docker driver based on user configuration
	I0311 13:30:05.746062  926550 start.go:297] selected driver: docker
	I0311 13:30:05.746078  926550 start.go:901] validating driver "docker" against <nil>
	I0311 13:30:05.746091  926550 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:30:05.748446  926550 out.go:177] 
	W0311 13:30:05.750152  926550 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0311 13:30:05.751688  926550 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-198981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-198981" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-198981

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198981"

                                                
                                                
----------------------- debugLogs end: false-198981 [took: 4.769144206s] --------------------------------
helpers_test.go:175: Cleaning up "false-198981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-198981
--- PASS: TestNetworkPlugins/group/false (5.24s)
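
Note: the exit-status-14 failure above is the expected guard: with `--container-runtime=containerd` there is no built-in pod network, so `--cni=false` is rejected before any cluster is created. A sketch of that shape of check (a hypothetical validator, not minikube's actual implementation):

```go
package main

import "fmt"

// validateCNI is hypothetical, mirroring the MK_USAGE text above: the
// containerd runtime needs some CNI selection to provide pod networking.
func validateCNI(runtime, cni string) error {
	if runtime == "containerd" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("containerd", "false"))
}
```
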

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (177.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-070145 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0311 13:34:12.665941  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-070145 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m57.464522158s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (177.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (78.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-740029 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-740029 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m18.511605385s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-070145 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [17caa9e1-97bb-4e5c-8974-a9a26e1d1913] Pending
helpers_test.go:344: "busybox" [17caa9e1-97bb-4e5c-8974-a9a26e1d1913] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [17caa9e1-97bb-4e5c-8974-a9a26e1d1913] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004012478s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-070145 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)
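
Note: the Pending-to-Running wait above is done by the suite's helpers; the same readiness gate can be reproduced with plain kubectl. A sketch, assuming the context name from this run and the test's 8m budget:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Block until pods labeled integration-test=busybox report Ready in the
	// default namespace, matching the helper wait in the log above.
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-070145",
		"wait", "--for=condition=ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("busybox never became ready:", err)
	}
}
```
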

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-070145 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-070145 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.179963331s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-070145 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/old-k8s-version/serial/Stop (12.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-070145 --alsologtostderr -v=3
E0311 13:34:52.581690  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-070145 --alsologtostderr -v=3: (12.547961605s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-070145 -n old-k8s-version-070145
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-070145 -n old-k8s-version-070145: exit status 7 (101.639464ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-070145 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
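
The "status error: exit status 7 (may be ok)" lines reflect that `minikube status` reports a stopped host through its exit code rather than failing outright. A hedged Go sketch of reading that code follows; treating 7 as "stopped" mirrors the test's tolerance here, not a contract verified in this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-070145",
		"-n", "old-k8s-version-070145")
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	fmt.Printf("host status: %s\n", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Println("exit status 7: profile reports Stopped (may be ok)")
	} else if err != nil {
		panic(err)
	}
}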

TestStartStop/group/no-preload/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-740029 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [574b94ff-40ae-4dc2-ad8a-4cb144464bb0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [574b94ff-40ae-4dc2-ad8a-4cb144464bb0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00369619s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-740029 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.48s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-740029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-740029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.491334567s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-740029 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.64s)

TestStartStop/group/no-preload/serial/Stop (12.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-740029 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-740029 --alsologtostderr -v=3: (12.260189183s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.26s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-740029 -n no-preload-740029
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-740029 -n no-preload-740029: exit status 7 (90.118347ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-740029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (289.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-740029 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0311 13:36:09.620969  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:39:52.582052  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-740029 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m48.849173857s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-740029 -n no-preload-740029
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.23s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mqn7m" [c04c2f8d-727f-437d-a88c-7499b2d52a06] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00507905s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mqn7m" [c04c2f8d-727f-437d-a88c-7499b2d52a06] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004269819s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-740029 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-740029 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
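
VerifyKubernetesImages shells out to `image list --format=json` and scans the result for images outside the minikube base set. Since the exact JSON schema is not shown in this log, the sketch below decodes generically rather than assuming field names:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "no-preload-740029",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	// Decode into a generic shape; field names are intentionally not assumed.
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img)
	}
}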

TestStartStop/group/no-preload/serial/Pause (3.97s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-740029 --alsologtostderr -v=1
E0311 13:41:09.620475  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-740029 -n no-preload-740029
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-740029 -n no-preload-740029: exit status 2 (414.465808ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-740029 -n no-preload-740029
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-740029 -n no-preload-740029: exit status 2 (437.889217ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-740029 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-740029 --alsologtostderr -v=1: (1.02683157s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-740029 -n no-preload-740029
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-740029 -n no-preload-740029
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.97s)
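
The Pause test above checks a full round trip: after `minikube pause`, the status templates report APIServer=Paused and Kubelet=Stopped (with exit code 2, tolerated as "may be ok"), and `minikube unpause` restores both. A hedged sketch of that sequence, wrapping only the commands shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status runs a single Go-template query against the profile and returns the
// trimmed output, ignoring the non-zero exit that paused profiles produce.
func status(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "no-preload-740029"
	if err := exec.Command("out/minikube-linux-arm64", "pause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	fmt.Println("APIServer:", status(profile, "APIServer")) // expect "Paused"
	fmt.Println("Kubelet:", status(profile, "Kubelet"))     // expect "Stopped"
	if err := exec.Command("out/minikube-linux-arm64", "unpause", "-p", profile).Run(); err != nil {
		panic(err)
	}
}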

TestStartStop/group/embed-certs/serial/FirstStart (63.24s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-810824 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-810824 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m3.242368321s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zhrbn" [c3b264f6-0afc-4d97-aea4-7eada9ef47c6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00449494s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zhrbn" [c3b264f6-0afc-4d97-aea4-7eada9ef47c6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004752392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-070145 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-070145 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/old-k8s-version/serial/Pause (4.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-070145 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-070145 -n old-k8s-version-070145
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-070145 -n old-k8s-version-070145: exit status 2 (376.677638ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-070145 -n old-k8s-version-070145
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-070145 -n old-k8s-version-070145: exit status 2 (464.872934ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-070145 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-070145 --alsologtostderr -v=1: (1.317004757s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-070145 -n old-k8s-version-070145
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-070145 -n old-k8s-version-070145
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.26s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-697991 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-697991 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m7.194160172s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.19s)

TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-810824 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cafdd021-deb8-4c57-99ac-cfee09ef5ecf] Pending
helpers_test.go:344: "busybox" [cafdd021-deb8-4c57-99ac-cfee09ef5ecf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cafdd021-deb8-4c57-99ac-cfee09ef5ecf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003975025s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-810824 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-810824 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-810824 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.095084539s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-810824 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/embed-certs/serial/Stop (12.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-810824 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-810824 --alsologtostderr -v=3: (12.18803909s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-810824 -n embed-certs-810824
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-810824 -n embed-certs-810824: exit status 7 (83.641079ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-810824 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (267.59s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-810824 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-810824 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m27.216362247s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-810824 -n embed-certs-810824
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.59s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-697991 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f0ba9424-f907-4c5b-ba5d-99422ba16d05] Pending
helpers_test.go:344: "busybox" [f0ba9424-f907-4c5b-ba5d-99422ba16d05] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f0ba9424-f907-4c5b-ba5d-99422ba16d05] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.0045472s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-697991 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-697991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-697991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.089998605s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-697991 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-697991 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-697991 --alsologtostderr -v=3: (12.058447705s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991: exit status 7 (87.617275ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-697991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-697991 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0311 13:44:35.627752  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 13:44:38.687188  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:38.692443  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:38.702673  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:38.722925  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:38.763233  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:38.843514  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:39.003866  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:39.324662  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:39.965606  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:41.246419  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:43.806603  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:48.927440  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:44:52.581566  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 13:44:59.167940  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:45:19.648934  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:45:45.009016  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:45.017333  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:45.028247  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:45.048722  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:45.089307  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:45.169719  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:45.330204  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:45.650798  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:46.291742  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:47.571967  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:50.132985  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:45:55.253205  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:46:00.609918  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:46:05.493656  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:46:09.620009  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:46:25.974695  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:47:06.935282  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-697991 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m28.264040986s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.73s)
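
This profile was started with --apiserver-port=8444. One way to confirm the non-default port survived the second start is to read the control-plane URL from kubectl; the expectation that the URL contains ":8444" is inferred from the flag, not from output shown in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-697991",
		"cluster-info").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), ":8444") {
		fmt.Println("API server is serving on port 8444")
	} else {
		fmt.Println("unexpected control-plane URL:\n" + string(out))
	}
}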

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tzm94" [f2eb7ac9-a0c7-41e0-80f1-538624fd35c5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007180448s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tzm94" [f2eb7ac9-a0c7-41e0-80f1-538624fd35c5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004158139s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-810824 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-810824 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.28s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-810824 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-810824 -n embed-certs-810824
E0311 13:47:22.530757  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-810824 -n embed-certs-810824: exit status 2 (326.950255ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-810824 -n embed-certs-810824
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-810824 -n embed-certs-810824: exit status 2 (331.476122ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-810824 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-810824 -n embed-certs-810824
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-810824 -n embed-certs-810824
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.28s)

TestStartStop/group/newest-cni/serial/FirstStart (49.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-506010 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-506010 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (49.14146798s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.14s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qk6gf" [fa3a13fc-1301-4636-857c-7f64ab1adcae] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004139143s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qk6gf" [fa3a13fc-1301-4636-857c-7f64ab1adcae] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004519596s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-697991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-697991 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.30s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-697991 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-697991 --alsologtostderr -v=1: (1.014788986s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991: exit status 2 (432.617549ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991: exit status 2 (799.017579ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-697991 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-697991 -n default-k8s-diff-port-697991
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.30s)

TestNetworkPlugins/group/auto/Start (70.41s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m10.404495515s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.41s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.68s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-506010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-506010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.677772586s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.68s)

TestStartStop/group/newest-cni/serial/Stop (1.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-506010 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-506010 --alsologtostderr -v=3: (1.389018789s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.39s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-506010 -n newest-cni-506010
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-506010 -n newest-cni-506010: exit status 7 (114.042145ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-506010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (22.31s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-506010 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0311 13:48:28.856024  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-506010 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (21.789644503s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-506010 -n newest-cni-506010
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.31s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-506010 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (4.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-506010 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-506010 -n newest-cni-506010
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-506010 -n newest-cni-506010: exit status 2 (396.609941ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-506010 -n newest-cni-506010
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-506010 -n newest-cni-506010: exit status 2 (387.326581ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-506010 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-506010 --alsologtostderr -v=1: (1.283889484s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-506010 -n newest-cni-506010
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-506010 -n newest-cni-506010
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.15s)
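
The pause/unpause cycle above can be replayed manually; a minimal sketch, assuming the newest-cni-506010 profile still exists:

	out/minikube-linux-arm64 pause -p newest-cni-506010 --alsologtostderr -v=1
	# while paused the apiserver reports Paused and the kubelet reports Stopped,
	# so both status queries exit non-zero ("may be ok" per the test)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-506010 -n newest-cni-506010
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-506010 -n newest-cni-506010
	out/minikube-linux-arm64 unpause -p newest-cni-506010 --alsologtostderr -v=1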
E0311 13:54:16.362531  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:54:16.726167  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:16.731440  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:16.741707  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:16.761973  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:16.802224  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:16.882643  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:17.042834  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:17.363364  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:18.003594  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:19.284596  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:21.845343  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:26.965911  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:37.206156  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
E0311 13:54:38.687153  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
E0311 13:54:48.166704  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:48.171979  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:48.182251  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:48.202593  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:48.242910  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:48.323297  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:48.483755  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:48.804249  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:49.445169  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:50.725999  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:52.582148  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
E0311 13:54:53.286833  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
E0311 13:54:57.687163  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/auto-198981/client.crt: no such file or directory
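
The E0311 cert_rotation.go:168 lines above come from client-go's certificate-reload watcher, which keeps polling client.crt files for profiles that have already been deleted (auto-198981, kindnet-198981, and so on); they are log noise, not test failures. When triaging a run, one quick way to summarize them, where run.log is a hypothetical saved copy of this output:

	grep -o 'profiles/[^/]*/client.crt' run.log | sort | uniq -c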

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (58.59s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (58.592493868s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.59s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-198981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-198981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rxmtg" [21291701-5516-42b6-8463-e9797c8ec73a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rxmtg" [21291701-5516-42b6-8463-e9797c8ec73a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.0044583s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.40s)
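
Each NetCatPod step deploys the same dnsutils-based netcat workload and waits for it to report Ready; roughly equivalent by hand, assuming the auto-198981 context and the repository's testdata/netcat-deployment.yaml:

	kubectl --context auto-198981 replace --force -f testdata/netcat-deployment.yaml
	# approximately what the helper's polling amounts to
	kubectl --context auto-198981 wait --for=condition=ready pod -l app=netcat --timeout=15m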

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-198981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
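
The DNS, Localhost and HairPin steps are three quick probes from inside the netcat pod: a cluster-DNS lookup, a loopback connect, and a connect back to the pod's own service name (hairpin NAT through the CNI). Condensed, for the auto-198981 context:

	kubectl --context auto-198981 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"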

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6w8br" [2f144fc0-59d3-40a3-b7f4-281acedb0cb3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004507869s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
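
ControllerPod steps only wait for the plugin's own daemonset pod to be Running in its namespace; spot-checking by hand for kindnet:

	kubectl --context kindnet-198981 get pods -n kube-system -l app=kindnet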

                                                
                                    
TestNetworkPlugins/group/calico/Start (81.6s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0311 13:49:52.581726  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/addons-109866/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m21.596177694s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.60s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-198981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-198981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-22fm8" [bf66ce7d-0aab-436a-a46c-2e02321563a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-22fm8" [bf66ce7d-0aab-436a-a46c-2e02321563a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004273152s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-198981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0311 13:50:06.371410  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/old-k8s-version-070145/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0311 13:50:45.009312  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
E0311 13:50:52.667043  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:51:09.620434  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/functional-891062/client.crt: no such file or directory
E0311 13:51:12.696944  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/no-preload-740029/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m4.519541943s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.52s)
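
Note that unlike the named --cni=kindnet/calico/flannel presets, this group points --cni at a manifest file, so any CNI shipped as Kubernetes YAML can be exercised the same way; a minimal sketch, where my-cni.yaml and the custom-cni-demo profile name are placeholders:

	out/minikube-linux-arm64 start -p custom-cni-demo --memory=3072 --cni=my-cni.yaml --driver=docker --container-runtime=containerd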

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nf9rd" [2b692060-e13b-46e9-9e91-727ba30347eb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008176168s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-198981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-198981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vcqpx" [3eaf3f18-2537-419f-a4e8-2e05afd176f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vcqpx" [3eaf3f18-2537-419f-a4e8-2e05afd176f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004176925s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.42s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-198981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-198981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-198981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bbxkk" [9e5ebd08-f4f3-4608-a628-5f2639d2dd6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bbxkk" [9e5ebd08-f4f3-4608-a628-5f2639d2dd6c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.008509503s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-198981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m30.366534121s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (64.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0311 13:52:54.440611  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:54.446114  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:54.456341  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:54.476593  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:54.516882  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:54.597181  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:54.757605  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:55.078116  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:55.718374  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:56.998936  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:52:59.559765  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:53:04.680050  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
E0311 13:53:14.920956  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.589801242s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.59s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f5vkb" [9afc3bd5-7264-49ba-8951-3488cdda9383] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004255996s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-198981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-198981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5pqc7" [703e5b72-23cb-4885-8b1c-0341d69f5ebc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5pqc7" [703e5b72-23cb-4885-8b1c-0341d69f5ebc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00421559s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-198981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-198981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mpnlj" [5f06597f-edae-43db-8c72-aefc65a148a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mpnlj" [5f06597f-edae-43db-8c72-aefc65a148a5] Running
E0311 13:53:35.401611  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/default-k8s-diff-port-697991/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004597371s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-198981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-198981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (57.39s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-198981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (57.386389629s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-198981 "pgrep -a kubelet"
E0311 13:54:58.407669  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-198981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9wv95" [9dfacb82-069f-490f-bda9-dffc824267af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9wv95" [9dfacb82-069f-490f-bda9-dffc824267af] Running
E0311 13:55:08.647861  746480 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-741028/.minikube/profiles/kindnet-198981/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004300736s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-198981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-198981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-201665 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-201665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-201665
--- SKIP: TestDownloadOnlyKic (0.58s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-622694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-622694
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.44s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-198981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-198981

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-198981

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

>>> host: /etc/hosts:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

>>> host: /etc/resolv.conf:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-198981

>>> host: crictl pods:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

>>> host: crictl containers:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

>>> k8s: describe netcat deployment:
error: context "kubenet-198981" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-198981" does not exist

>>> k8s: netcat logs:
error: context "kubenet-198981" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-198981" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-198981" does not exist

>>> k8s: coredns logs:
error: context "kubenet-198981" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-198981" does not exist

>>> k8s: api server logs:
error: context "kubenet-198981" does not exist

>>> host: /etc/cni:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-198981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-198981" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-198981

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198981"

                                                
                                                
----------------------- debugLogs end: kubenet-198981 [took: 4.211879971s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-198981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-198981
--- SKIP: TestNetworkPlugins/group/kubenet (4.44s)
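
Note: every probe in the debugLogs dump above failed with either kubectl's "context was not found" error or minikube's "Profile ... not found" hint. That is expected here: the kubenet test was skipped before "minikube start -p kubenet-198981" ever ran, so the post-mortem collector queried a kubeconfig context and a minikube profile that were never created. Below is a minimal Go sketch of such an existence check, assuming client-go is available; the helper name contextExists is illustrative, not part of the test harness.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether the kubeconfig that kubectl would load
// ($KUBECONFIG or ~/.kube/config) defines the named context.
// Illustrative helper only; not the harness's actual code.
func contextExists(name string) (bool, error) {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := contextExists("kubenet-198981")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		// The state the debug collector hit: kubectl subsequently reports
		// "context was not found for specified context: kubenet-198981".
		fmt.Println(`context "kubenet-198981" not found in kubeconfig`)
	}
}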

TestNetworkPlugins/group/cilium (5.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-198981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-198981

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-198981

>>> host: /etc/nsswitch.conf:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /etc/hosts:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /etc/resolv.conf:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-198981

>>> host: crictl pods:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: crictl containers:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> k8s: describe netcat deployment:
error: context "cilium-198981" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-198981" does not exist

>>> k8s: netcat logs:
error: context "cilium-198981" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-198981" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-198981" does not exist

>>> k8s: coredns logs:
error: context "cilium-198981" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-198981" does not exist

>>> k8s: api server logs:
error: context "cilium-198981" does not exist

>>> host: /etc/cni:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: ip a s:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: ip r s:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: iptables-save:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: iptables table nat:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-198981

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-198981

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-198981" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-198981" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-198981

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-198981

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-198981" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-198981" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-198981" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-198981" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-198981" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: kubelet daemon config:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> k8s: kubelet logs:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-198981

>>> host: docker daemon status:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: docker daemon config:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: docker system info:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: cri-docker daemon status:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: cri-docker daemon config:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: cri-dockerd version:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: containerd daemon status:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: containerd daemon config:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: containerd config dump:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: crio daemon status:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: crio daemon config:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: /etc/crio:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

>>> host: crio config:
* Profile "cilium-198981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198981"

----------------------- debugLogs end: cilium-198981 [took: 5.226314631s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-198981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-198981
--- SKIP: TestNetworkPlugins/group/cilium (5.43s)
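
Note: the ">>> k8s: kubectl config:" probe in both dumps captured an entirely empty kubeconfig (clusters, contexts, and users all null), so any kubectl call pinned to a skipped profile's context must fail before ever reaching a cluster. Below is a quick reproduction sketch in Go, assuming kubectl is on PATH; the exact probe command is illustrative, and the collector's real invocation may differ.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Issue the same kind of probe the debug collector runs, against a
	// context that was never created. Expect a non-zero exit status and
	// kubectl's missing-context error in the combined output.
	out, err := exec.Command("kubectl", "--context", "cilium-198981",
		"get", "pods", "--all-namespaces").CombinedOutput()
	if err != nil {
		fmt.Printf("probe failed as expected: %v\n%s", err, out)
		return
	}
	fmt.Printf("unexpected success:\n%s", out)
}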