Test Report: Docker_Linux_containerd_arm64 17735

92ccbd1049dad7c606832f9da24cf8bb40191acf:2024-03-27:33769

Test fail (7/335)

TestAddons/parallel/Ingress (38.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-135346 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-135346 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-135346 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e7e0be33-0343-4d3c-9527-5cad7d9c6e31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e7e0be33-0343-4d3c-9527-5cad7d9c6e31] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004704977s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-135346 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.070277414s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-135346 addons disable ingress-dns --alsologtostderr -v=1: (1.620692261s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-135346 addons disable ingress --alsologtostderr -v=1: (7.864581105s)
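The failing step above is an ordinary DNS query: the ingress-dns addon answers for hostnames such as hello-john.test (created here from testdata/ingress-dns-example-v1.yaml) on port 53 of the node IP that "minikube ip" reported, 192.168.49.2, and the query timed out instead of returning an answer. A minimal Go sketch of the same check, assuming the addons-135346 cluster from this run were still up and 192.168.49.2 reachable from the host; it is a reproduction aid, not part of the test suite:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every query to the ingress-dns endpoint the test used,
	// i.e. what `nslookup hello-john.test 192.168.49.2` does.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// The run above hit this path: no answer before the timeout.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}

On a healthy run this resolves to the cluster's ingress address; the "connection timed out; no servers could be reached" output above is the failure this sketch would surface as err.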
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-135346
helpers_test.go:235: (dbg) docker inspect addons-135346:

-- stdout --
	[
	    {
	        "Id": "04e8033b57cb59cfba5a52eb7e1f29197bdfa0e8319c9a919b5ae4c953570367",
	        "Created": "2024-03-27T22:02:51.644486326Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1417407,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-27T22:02:51.914902569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f9b5358e8c18dbe49e632154cad75e0968b2e103f621caff2c3ed996f4155861",
	        "ResolvConfPath": "/var/lib/docker/containers/04e8033b57cb59cfba5a52eb7e1f29197bdfa0e8319c9a919b5ae4c953570367/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04e8033b57cb59cfba5a52eb7e1f29197bdfa0e8319c9a919b5ae4c953570367/hostname",
	        "HostsPath": "/var/lib/docker/containers/04e8033b57cb59cfba5a52eb7e1f29197bdfa0e8319c9a919b5ae4c953570367/hosts",
	        "LogPath": "/var/lib/docker/containers/04e8033b57cb59cfba5a52eb7e1f29197bdfa0e8319c9a919b5ae4c953570367/04e8033b57cb59cfba5a52eb7e1f29197bdfa0e8319c9a919b5ae4c953570367-json.log",
	        "Name": "/addons-135346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-135346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-135346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b60d36909b67863670e6e1fdc1d8e4b2be8061b6b9c476daacfe75b3e9422fb-init/diff:/var/lib/docker/overlay2/9aff79c4d350679b403430af5e9f1b0f6423798443e2d342556eedd63c4805d0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b60d36909b67863670e6e1fdc1d8e4b2be8061b6b9c476daacfe75b3e9422fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b60d36909b67863670e6e1fdc1d8e4b2be8061b6b9c476daacfe75b3e9422fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b60d36909b67863670e6e1fdc1d8e4b2be8061b6b9c476daacfe75b3e9422fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-135346",
	                "Source": "/var/lib/docker/volumes/addons-135346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-135346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-135346",
	                "name.minikube.sigs.k8s.io": "addons-135346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "906e6f0df60225843fe9c037dbbb4b94bbcca7d3a8c43b2f0345d848d23c0723",
	            "SandboxKey": "/var/run/docker/netns/906e6f0df602",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34300"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34299"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34296"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34298"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34297"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-135346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "c9e8ff7961aff2ebe738fb47bd564f93d4e4fb582728e4ddc44c8fa18f2f4fe4",
	                    "EndpointID": "d3860590cad9c7c5287adb84a099ae9f4c1798a9e4d7f8dba296f9e9786f56d7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-135346",
	                        "04e8033b57cb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
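The post-mortem works from the raw docker inspect JSON above. As a sketch (not the harness's actual code), the two facts it draws on, container state and the cluster-network IP, can be pulled out of that JSON like this, assuming Docker and the addons-135346 container are present:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspectEntry mirrors only the fields of the `docker inspect`
// output above that matter for this post-mortem.
type inspectEntry struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
	NetworkSettings struct {
		Networks map[string]struct {
			IPAddress string `json:"IPAddress"`
		} `json:"Networks"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-135346").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatalf("unmarshal: %v", err)
	}
	for _, e := range entries {
		fmt.Printf("%s: status=%s running=%v\n", e.Name, e.State.Status, e.State.Running)
		for name, n := range e.NetworkSettings.Networks {
			// For this run: network addons-135346, IP 192.168.49.2.
			fmt.Printf("  network %s -> %s\n", name, n.IPAddress)
		}
	}
}

Run against the container above, it would report status=running and the 192.168.49.2 address that the failed nslookup targeted, confirming the node itself was up when the DNS query went unanswered.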
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-135346 -n addons-135346
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-135346 logs -n 25: (1.866129622s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| delete  | -p download-only-223339              | download-only-223339   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| start   | -o=json --download-only              | download-only-296933   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | -p download-only-296933              |                        |         |                |                     |                     |
	|         | --force --alsologtostderr            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0  |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| delete  | -p download-only-296933              | download-only-296933   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| delete  | -p download-only-422079              | download-only-422079   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| delete  | -p download-only-223339              | download-only-223339   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| delete  | -p download-only-296933              | download-only-296933   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| start   | --download-only -p                   | download-docker-140125 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | download-docker-140125               |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p download-docker-140125            | download-docker-140125 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| start   | --download-only -p                   | binary-mirror-257059   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | binary-mirror-257059                 |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --binary-mirror                      |                        |         |                |                     |                     |
	|         | http://127.0.0.1:41911               |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-257059              | binary-mirror-257059   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| addons  | disable dashboard -p                 | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | addons-135346                        |                        |         |                |                     |                     |
	| addons  | enable dashboard -p                  | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | addons-135346                        |                        |         |                |                     |                     |
	| start   | -p addons-135346 --wait=true         | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:04 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |                |                     |                     |
	|         | --addons=registry                    |                        |         |                |                     |                     |
	|         | --addons=metrics-server              |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --addons=ingress                     |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |                |                     |                     |
	| ip      | addons-135346 ip                     | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	| addons  | addons-135346 addons disable         | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	|         | registry --alsologtostderr           |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-135346 addons                 | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	|         | disable metrics-server               |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	|         | addons-135346                        |                        |         |                |                     |                     |
	| ssh     | addons-135346 ssh curl -s            | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |                |                     |                     |
	|         | nginx.example.com'                   |                        |         |                |                     |                     |
	| ip      | addons-135346 ip                     | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	| addons  | addons-135346 addons                 | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	|         | disable csi-hostpath-driver          |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | addons-135346 addons disable         | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-135346 addons disable         | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |                |                     |                     |
	| addons  | addons-135346 addons                 | addons-135346          | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:05 UTC | 27 Mar 24 22:05 UTC |
	|         | disable volumesnapshots              |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 22:02:28
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 22:02:28.020137 1416963 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:02:28.020309 1416963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:02:28.020351 1416963 out.go:304] Setting ErrFile to fd 2...
	I0327 22:02:28.020365 1416963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:02:28.020604 1416963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:02:28.021124 1416963 out.go:298] Setting JSON to false
	I0327 22:02:28.022050 1416963 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20686,"bootTime":1711556262,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:02:28.022141 1416963 start.go:139] virtualization:  
	I0327 22:02:28.024860 1416963 out.go:177] * [addons-135346] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 22:02:28.027136 1416963 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 22:02:28.029355 1416963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:02:28.027262 1416963 notify.go:220] Checking for updates...
	I0327 22:02:28.031776 1416963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:02:28.034297 1416963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:02:28.036386 1416963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 22:02:28.038553 1416963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 22:02:28.040993 1416963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:02:28.060583 1416963 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:02:28.060725 1416963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:02:28.112880 1416963 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 22:02:28.103607789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:02:28.112996 1416963 docker.go:295] overlay module found
	I0327 22:02:28.115189 1416963 out.go:177] * Using the docker driver based on user configuration
	I0327 22:02:28.117346 1416963 start.go:297] selected driver: docker
	I0327 22:02:28.117363 1416963 start.go:901] validating driver "docker" against <nil>
	I0327 22:02:28.117379 1416963 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 22:02:28.118038 1416963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:02:28.183752 1416963 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 22:02:28.17320608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:02:28.183922 1416963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 22:02:28.184160 1416963 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 22:02:28.186356 1416963 out.go:177] * Using Docker driver with root privileges
	I0327 22:02:28.188628 1416963 cni.go:84] Creating CNI manager for ""
	I0327 22:02:28.188648 1416963 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:02:28.188671 1416963 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 22:02:28.188770 1416963 start.go:340] cluster config:
	{Name:addons-135346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-135346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:02:28.191344 1416963 out.go:177] * Starting "addons-135346" primary control-plane node in "addons-135346" cluster
	I0327 22:02:28.193328 1416963 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 22:02:28.195229 1416963 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 22:02:28.197121 1416963 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 22:02:28.197182 1416963 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0327 22:02:28.197195 1416963 cache.go:56] Caching tarball of preloaded images
	I0327 22:02:28.197198 1416963 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 22:02:28.197297 1416963 preload.go:173] Found /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 22:02:28.197307 1416963 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0327 22:02:28.197682 1416963 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/config.json ...
	I0327 22:02:28.197724 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/config.json: {Name:mk926dfcd3fdae3aec9b479997cc183d63903ace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:28.211076 1416963 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 22:02:28.211217 1416963 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 22:02:28.211244 1416963 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 22:02:28.211249 1416963 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 22:02:28.211257 1416963 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 22:02:28.211262 1416963 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 from local cache
	I0327 22:02:44.502372 1416963 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 from cached tarball
	I0327 22:02:44.502442 1416963 cache.go:194] Successfully downloaded all kic artifacts
	I0327 22:02:44.502474 1416963 start.go:360] acquireMachinesLock for addons-135346: {Name:mkeb39cdb54910d338861576c7c208f317108f00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 22:02:44.503309 1416963 start.go:364] duration metric: took 803.006µs to acquireMachinesLock for "addons-135346"
	I0327 22:02:44.503359 1416963 start.go:93] Provisioning new machine with config: &{Name:addons-135346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-135346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 22:02:44.503453 1416963 start.go:125] createHost starting for "" (driver="docker")
	I0327 22:02:44.505806 1416963 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0327 22:02:44.506056 1416963 start.go:159] libmachine.API.Create for "addons-135346" (driver="docker")
	I0327 22:02:44.506101 1416963 client.go:168] LocalClient.Create starting
	I0327 22:02:44.506219 1416963 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem
	I0327 22:02:45.026541 1416963 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/cert.pem
	I0327 22:02:45.204531 1416963 cli_runner.go:164] Run: docker network inspect addons-135346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0327 22:02:45.224941 1416963 cli_runner.go:211] docker network inspect addons-135346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0327 22:02:45.225042 1416963 network_create.go:281] running [docker network inspect addons-135346] to gather additional debugging logs...
	I0327 22:02:45.225065 1416963 cli_runner.go:164] Run: docker network inspect addons-135346
	W0327 22:02:45.264412 1416963 cli_runner.go:211] docker network inspect addons-135346 returned with exit code 1
	I0327 22:02:45.264458 1416963 network_create.go:284] error running [docker network inspect addons-135346]: docker network inspect addons-135346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-135346 not found
	I0327 22:02:45.264473 1416963 network_create.go:286] output of [docker network inspect addons-135346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-135346 not found
	
	** /stderr **
	I0327 22:02:45.264636 1416963 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 22:02:45.283748 1416963 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400253ddd0}
	I0327 22:02:45.283800 1416963 network_create.go:124] attempt to create docker network addons-135346 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0327 22:02:45.283865 1416963 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-135346 addons-135346
	I0327 22:02:45.369581 1416963 network_create.go:108] docker network addons-135346 192.168.49.0/24 created
	I0327 22:02:45.369666 1416963 kic.go:121] calculated static IP "192.168.49.2" for the "addons-135346" container
	I0327 22:02:45.369907 1416963 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0327 22:02:45.385230 1416963 cli_runner.go:164] Run: docker volume create addons-135346 --label name.minikube.sigs.k8s.io=addons-135346 --label created_by.minikube.sigs.k8s.io=true
	I0327 22:02:45.405833 1416963 oci.go:103] Successfully created a docker volume addons-135346
	I0327 22:02:45.405926 1416963 cli_runner.go:164] Run: docker run --rm --name addons-135346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135346 --entrypoint /usr/bin/test -v addons-135346:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib
	I0327 22:02:47.401436 1416963 cli_runner.go:217] Completed: docker run --rm --name addons-135346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135346 --entrypoint /usr/bin/test -v addons-135346:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib: (1.995473526s)
	I0327 22:02:47.401469 1416963 oci.go:107] Successfully prepared a docker volume addons-135346
	I0327 22:02:47.401498 1416963 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 22:02:47.401518 1416963 kic.go:194] Starting extracting preloaded images to volume ...
	I0327 22:02:47.401604 1416963 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-135346:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0327 22:02:51.579047 1416963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-135346:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.177395368s)
	I0327 22:02:51.579082 1416963 kic.go:203] duration metric: took 4.17756118s to extract preloaded images to volume ...
	W0327 22:02:51.579229 1416963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0327 22:02:51.579344 1416963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0327 22:02:51.631637 1416963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-135346 --name addons-135346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-135346 --network addons-135346 --ip 192.168.49.2 --volume addons-135346:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8
	I0327 22:02:51.926923 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Running}}
	I0327 22:02:51.946139 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:02:51.971215 1416963 cli_runner.go:164] Run: docker exec addons-135346 stat /var/lib/dpkg/alternatives/iptables
	I0327 22:02:52.053387 1416963 oci.go:144] the created container "addons-135346" has a running status.
	I0327 22:02:52.053419 1416963 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa...
	I0327 22:02:52.424256 1416963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0327 22:02:52.441547 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:02:52.466296 1416963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0327 22:02:52.466321 1416963 kic_runner.go:114] Args: [docker exec --privileged addons-135346 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0327 22:02:52.526494 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:02:52.553039 1416963 machine.go:94] provisionDockerMachine start ...
	I0327 22:02:52.553139 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:02:52.574246 1416963 main.go:141] libmachine: Using SSH client type: native
	I0327 22:02:52.574624 1416963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34300 <nil> <nil>}
	I0327 22:02:52.574640 1416963 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 22:02:52.738688 1416963 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-135346
	
	I0327 22:02:52.738711 1416963 ubuntu.go:169] provisioning hostname "addons-135346"
	I0327 22:02:52.738781 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:02:52.763993 1416963 main.go:141] libmachine: Using SSH client type: native
	I0327 22:02:52.764228 1416963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34300 <nil> <nil>}
	I0327 22:02:52.764240 1416963 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-135346 && echo "addons-135346" | sudo tee /etc/hostname
	I0327 22:02:52.929230 1416963 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-135346
	
	I0327 22:02:52.929387 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:02:52.957927 1416963 main.go:141] libmachine: Using SSH client type: native
	I0327 22:02:52.958177 1416963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34300 <nil> <nil>}
	I0327 22:02:52.958195 1416963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-135346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-135346/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-135346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 22:02:53.090502 1416963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 22:02:53.090529 1416963 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17735-1410709/.minikube CaCertPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17735-1410709/.minikube}
	I0327 22:02:53.090561 1416963 ubuntu.go:177] setting up certificates
	I0327 22:02:53.090596 1416963 provision.go:84] configureAuth start
	I0327 22:02:53.090659 1416963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135346
	I0327 22:02:53.105918 1416963 provision.go:143] copyHostCerts
	I0327 22:02:53.106002 1416963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.pem (1078 bytes)
	I0327 22:02:53.106133 1416963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17735-1410709/.minikube/cert.pem (1123 bytes)
	I0327 22:02:53.106198 1416963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17735-1410709/.minikube/key.pem (1675 bytes)
	I0327 22:02:53.106253 1416963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca-key.pem org=jenkins.addons-135346 san=[127.0.0.1 192.168.49.2 addons-135346 localhost minikube]
	I0327 22:02:53.931190 1416963 provision.go:177] copyRemoteCerts
	I0327 22:02:53.931269 1416963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 22:02:53.931313 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:02:53.947320 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:02:54.036658 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 22:02:54.064745 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 22:02:54.092116 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 22:02:54.118795 1416963 provision.go:87] duration metric: took 1.028182246s to configureAuth
	I0327 22:02:54.118828 1416963 ubuntu.go:193] setting minikube options for container-runtime
	I0327 22:02:54.119045 1416963 config.go:182] Loaded profile config "addons-135346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:02:54.119061 1416963 machine.go:97] duration metric: took 1.566003533s to provisionDockerMachine
	I0327 22:02:54.119069 1416963 client.go:171] duration metric: took 9.612959864s to LocalClient.Create
	I0327 22:02:54.119090 1416963 start.go:167] duration metric: took 9.613034725s to libmachine.API.Create "addons-135346"
	I0327 22:02:54.119103 1416963 start.go:293] postStartSetup for "addons-135346" (driver="docker")
	I0327 22:02:54.119115 1416963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 22:02:54.119175 1416963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 22:02:54.119227 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:02:54.135178 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:02:54.231448 1416963 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 22:02:54.234721 1416963 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 22:02:54.234757 1416963 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 22:02:54.234769 1416963 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 22:02:54.234776 1416963 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0327 22:02:54.234787 1416963 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-1410709/.minikube/addons for local assets ...
	I0327 22:02:54.234855 1416963 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-1410709/.minikube/files for local assets ...
	I0327 22:02:54.234881 1416963 start.go:296] duration metric: took 115.772108ms for postStartSetup
	I0327 22:02:54.235185 1416963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135346
	I0327 22:02:54.249968 1416963 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/config.json ...
	I0327 22:02:54.250317 1416963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 22:02:54.250378 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:02:54.265379 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:02:54.351242 1416963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0327 22:02:54.355797 1416963 start.go:128] duration metric: took 9.852328777s to createHost
	I0327 22:02:54.355825 1416963 start.go:83] releasing machines lock for "addons-135346", held for 9.852493072s
	I0327 22:02:54.355906 1416963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135346
	I0327 22:02:54.370001 1416963 ssh_runner.go:195] Run: cat /version.json
	I0327 22:02:54.370058 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:02:54.370366 1416963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 22:02:54.370512 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:02:54.385331 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:02:54.394581 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:02:54.478169 1416963 ssh_runner.go:195] Run: systemctl --version
	I0327 22:02:54.590746 1416963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 22:02:54.597154 1416963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0327 22:02:54.623231 1416963 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0327 22:02:54.623312 1416963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 22:02:54.655156 1416963 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
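
The two find commands above patch the loopback CNI config in place (inject a "name" field if missing, pin cniVersion to 1.0.0) and rename any bridge/podman configs to *.mk_disabled so they cannot conflict with kindnet. A rough Go equivalent of the loopback patch, operating on the JSON directly instead of sed; the file name here is hypothetical and error handling is simplified.

package main

import (
	"encoding/json"
	"os"
)

func main() {
	path := "/etc/cni/net.d/200-loopback.conf" // hypothetical file name
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	// Mirror the sed edits from the log: add a name if absent, pin cniVersion.
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
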
	I0327 22:02:54.655181 1416963 start.go:494] detecting cgroup driver to use...
	I0327 22:02:54.655216 1416963 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 22:02:54.655268 1416963 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 22:02:54.667775 1416963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 22:02:54.679682 1416963 docker.go:217] disabling cri-docker service (if available) ...
	I0327 22:02:54.679791 1416963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 22:02:54.694293 1416963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 22:02:54.709225 1416963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 22:02:54.788856 1416963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 22:02:54.887344 1416963 docker.go:233] disabling docker service ...
	I0327 22:02:54.887436 1416963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 22:02:54.908001 1416963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 22:02:54.920298 1416963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 22:02:55.016091 1416963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 22:02:55.124378 1416963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 22:02:55.136597 1416963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 22:02:55.155404 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0327 22:02:55.165865 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 22:02:55.176587 1416963 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 22:02:55.176717 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 22:02:55.187889 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 22:02:55.199923 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 22:02:55.210851 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 22:02:55.221303 1416963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 22:02:55.230927 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 22:02:55.241229 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 22:02:55.251622 1416963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 22:02:55.261653 1416963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 22:02:55.270260 1416963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 22:02:55.278897 1416963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 22:02:55.362209 1416963 ssh_runner.go:195] Run: sudo systemctl restart containerd
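
The run of sed commands above rewrites /etc/containerd/config.toml for the chosen setup: sandbox_image pinned to registry.k8s.io/pause:3.9, the runc v2 runtime shim, SystemdCgroup = false to match the detected "cgroupfs" driver, and enable_unprivileged_ports = true; containerd is then restarted to pick the file up. As a sketch, the SystemdCgroup substitution expressed in Go with regexp rather than sed (illustrative only; minikube shells out exactly as logged):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same substitution as the logged sed: force SystemdCgroup = false,
	// preserving the original indentation via the capture group.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}
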
	I0327 22:02:55.489050 1416963 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0327 22:02:55.489209 1416963 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0327 22:02:55.493515 1416963 start.go:562] Will wait 60s for crictl version
	I0327 22:02:55.493645 1416963 ssh_runner.go:195] Run: which crictl
	I0327 22:02:55.497094 1416963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 22:02:55.534725 1416963 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0327 22:02:55.534870 1416963 ssh_runner.go:195] Run: containerd --version
	I0327 22:02:55.555239 1416963 ssh_runner.go:195] Run: containerd --version
	I0327 22:02:55.580698 1416963 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0327 22:02:55.582921 1416963 cli_runner.go:164] Run: docker network inspect addons-135346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 22:02:55.596134 1416963 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0327 22:02:55.599845 1416963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
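
The grep/cp pipeline above makes the host.minikube.internal mapping idempotent: drop any existing line ending in that name, append the fresh "192.168.49.1<TAB>host.minikube.internal" entry, and copy the result back over /etc/hosts. The same ensure-entry pattern sketched in Go (the log does it in one bash pipeline; this version only prints the rewritten file):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line for name, then appends ip<TAB>name,
// mirroring the `grep -v` + `echo` pipeline in the log.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(raw), "192.168.49.1", "host.minikube.internal"))
}
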
	I0327 22:02:55.610587 1416963 kubeadm.go:877] updating cluster {Name:addons-135346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-135346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 22:02:55.610712 1416963 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 22:02:55.610772 1416963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 22:02:55.650520 1416963 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 22:02:55.650545 1416963 containerd.go:534] Images already preloaded, skipping extraction
	I0327 22:02:55.650609 1416963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 22:02:55.685143 1416963 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 22:02:55.685164 1416963 cache_images.go:84] Images are preloaded, skipping loading
	I0327 22:02:55.685173 1416963 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 containerd true true} ...
	I0327 22:02:55.685267 1416963 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-135346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-135346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 22:02:55.685334 1416963 ssh_runner.go:195] Run: sudo crictl info
	I0327 22:02:55.726975 1416963 cni.go:84] Creating CNI manager for ""
	I0327 22:02:55.727001 1416963 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:02:55.727011 1416963 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 22:02:55.727034 1416963 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-135346 NodeName:addons-135346 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 22:02:55.727170 1416963 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-135346"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 22:02:55.727249 1416963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 22:02:55.736032 1416963 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 22:02:55.736112 1416963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 22:02:55.744905 1416963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0327 22:02:55.762766 1416963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 22:02:55.780329 1416963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
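
The three "scp memory" transfers above push the rendered kubelet drop-in, the kubelet unit file, and kubeadm.yaml.new to the node; "memory" indicates the bytes come from an in-memory template rather than a local file. A toy rendering of the InitConfiguration fragment with text/template follows; the template text and field names are made up for illustration to match the YAML printed above, while the real template ships inside minikube.

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl is a hypothetical fragment mirroring the rendered YAML above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	err := t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.49.2", 8443, "addons-135346"})
	if err != nil {
		panic(err)
	}
}
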
	I0327 22:02:55.798265 1416963 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0327 22:02:55.801791 1416963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 22:02:55.812464 1416963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 22:02:55.890449 1416963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 22:02:55.905468 1416963 certs.go:68] Setting up /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346 for IP: 192.168.49.2
	I0327 22:02:55.905563 1416963 certs.go:194] generating shared ca certs ...
	I0327 22:02:55.905599 1416963 certs.go:226] acquiring lock for ca certs: {Name:mk24b20553d2a6654488b5498452cac9c2150bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:55.905819 1416963 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.key
	I0327 22:02:56.253384 1416963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.crt ...
	I0327 22:02:56.253415 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.crt: {Name:mk7bed95c6a726ce6201156447a877634a7deffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:56.254383 1416963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.key ...
	I0327 22:02:56.254420 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.key: {Name:mkb43492750a8322f88b1856e33e562eca0711bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:56.254515 1416963 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.key
	I0327 22:02:56.710468 1416963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.crt ...
	I0327 22:02:56.710504 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.crt: {Name:mk4305e556f4b099386cb4d91a18193b3be80a56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:56.710712 1416963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.key ...
	I0327 22:02:56.710727 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.key: {Name:mkff90d62e5eab19f9aaef2a56d7064a925693c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:56.710842 1416963 certs.go:256] generating profile certs ...
	I0327 22:02:56.710917 1416963 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.key
	I0327 22:02:56.710935 1416963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt with IP's: []
	I0327 22:02:56.953289 1416963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt ...
	I0327 22:02:56.953322 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: {Name:mk8f747deff76c93d080b60806aa7576423ae5ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:56.953515 1416963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.key ...
	I0327 22:02:56.953528 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.key: {Name:mka64262d40408464b9ac08a08d663a3108ff608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:56.953615 1416963 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.key.a9633692
	I0327 22:02:56.953636 1416963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.crt.a9633692 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0327 22:02:57.404780 1416963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.crt.a9633692 ...
	I0327 22:02:57.404811 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.crt.a9633692: {Name:mk0f810b719a99fa6f75585023566c13e13245b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:57.404995 1416963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.key.a9633692 ...
	I0327 22:02:57.405009 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.key.a9633692: {Name:mk280b4380c182fef27fbe141fff618cd903667b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:57.405095 1416963 certs.go:381] copying /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.crt.a9633692 -> /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.crt
	I0327 22:02:57.405183 1416963 certs.go:385] copying /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.key.a9633692 -> /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.key
	I0327 22:02:57.405239 1416963 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/proxy-client.key
	I0327 22:02:57.405261 1416963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/proxy-client.crt with IP's: []
	I0327 22:02:58.142870 1416963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/proxy-client.crt ...
	I0327 22:02:58.142909 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/proxy-client.crt: {Name:mka7ce0b4b02ea5a1080119b12394cb7a3e3ba54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:58.143637 1416963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/proxy-client.key ...
	I0327 22:02:58.143660 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/proxy-client.key: {Name:mkcba9f730a98c484a48d89adfd3f83a1712105e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:02:58.143876 1416963 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 22:02:58.143922 1416963 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem (1078 bytes)
	I0327 22:02:58.143954 1416963 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/cert.pem (1123 bytes)
	I0327 22:02:58.143986 1416963 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/key.pem (1675 bytes)
	I0327 22:02:58.144576 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 22:02:58.169889 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0327 22:02:58.195688 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 22:02:58.227224 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 22:02:58.254728 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0327 22:02:58.283777 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 22:02:58.308442 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 22:02:58.333414 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 22:02:58.359292 1416963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 22:02:58.385407 1416963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 22:02:58.404567 1416963 ssh_runner.go:195] Run: openssl version
	I0327 22:02:58.410263 1416963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 22:02:58.419958 1416963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 22:02:58.423558 1416963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 22:02 /usr/share/ca-certificates/minikubeCA.pem
	I0327 22:02:58.423629 1416963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 22:02:58.430784 1416963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
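
In the steps above, `openssl x509 -hash -noout` prints the certificate's subject hash, and the CA becomes trusted system-wide by symlinking it as <hash>.0 under /etc/ssl/certs (b5213941.0 in this run). A sketch of those two steps from Go, shelling out to openssl exactly as the log does; paths are taken from the log and the symlink creation is kept idempotent like the `test -L || ln -fs` guard above.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Idempotent: only create the symlink if it does not already exist.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
	fmt.Println("linked", link)
}
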
	I0327 22:02:58.440296 1416963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 22:02:58.443534 1416963 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 22:02:58.443584 1416963 kubeadm.go:391] StartCluster: {Name:addons-135346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-135346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:02:58.443669 1416963 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0327 22:02:58.443725 1416963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 22:02:58.480561 1416963 cri.go:89] found id: ""
	I0327 22:02:58.480659 1416963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 22:02:58.489911 1416963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 22:02:58.498886 1416963 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0327 22:02:58.498963 1416963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 22:02:58.508222 1416963 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 22:02:58.508241 1416963 kubeadm.go:156] found existing configuration files:
	
	I0327 22:02:58.508295 1416963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 22:02:58.517235 1416963 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 22:02:58.517323 1416963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 22:02:58.525819 1416963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 22:02:58.534816 1416963 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 22:02:58.534901 1416963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 22:02:58.543378 1416963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 22:02:58.552216 1416963 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 22:02:58.552286 1416963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 22:02:58.560988 1416963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 22:02:58.569815 1416963 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 22:02:58.569878 1416963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 22:02:58.578480 1416963 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0327 22:02:58.619462 1416963 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 22:02:58.619691 1416963 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 22:02:58.662191 1416963 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0327 22:02:58.662354 1416963 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0327 22:02:58.662438 1416963 kubeadm.go:309] OS: Linux
	I0327 22:02:58.662502 1416963 kubeadm.go:309] CGROUPS_CPU: enabled
	I0327 22:02:58.662582 1416963 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0327 22:02:58.662654 1416963 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0327 22:02:58.662726 1416963 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0327 22:02:58.662792 1416963 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0327 22:02:58.662873 1416963 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0327 22:02:58.662975 1416963 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0327 22:02:58.663047 1416963 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0327 22:02:58.663111 1416963 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0327 22:02:58.730111 1416963 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 22:02:58.730283 1416963 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 22:02:58.730439 1416963 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0327 22:02:58.958857 1416963 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 22:02:58.961942 1416963 out.go:204]   - Generating certificates and keys ...
	I0327 22:02:58.962178 1416963 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 22:02:58.962311 1416963 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 22:02:59.556124 1416963 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 22:02:59.738562 1416963 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 22:03:00.534509 1416963 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 22:03:00.734672 1416963 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 22:03:01.061895 1416963 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 22:03:01.062273 1416963 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-135346 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0327 22:03:01.213340 1416963 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 22:03:01.213475 1416963 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-135346 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0327 22:03:01.683229 1416963 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 22:03:02.029733 1416963 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 22:03:03.398154 1416963 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 22:03:03.398432 1416963 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 22:03:03.989338 1416963 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 22:03:04.147125 1416963 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 22:03:04.439951 1416963 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 22:03:04.880812 1416963 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 22:03:05.173653 1416963 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 22:03:05.174399 1416963 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 22:03:05.177334 1416963 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 22:03:05.179950 1416963 out.go:204]   - Booting up control plane ...
	I0327 22:03:05.180054 1416963 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 22:03:05.180129 1416963 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 22:03:05.180675 1416963 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 22:03:05.192863 1416963 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 22:03:05.193866 1416963 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 22:03:05.194105 1416963 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 22:03:05.294870 1416963 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 22:03:12.791760 1416963 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.502065 seconds
	I0327 22:03:12.814471 1416963 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 22:03:12.826617 1416963 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 22:03:13.352545 1416963 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 22:03:13.352746 1416963 kubeadm.go:309] [mark-control-plane] Marking the node addons-135346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 22:03:13.864639 1416963 kubeadm.go:309] [bootstrap-token] Using token: soycl3.pnefi2lyhzs0ulcp
	I0327 22:03:13.866988 1416963 out.go:204]   - Configuring RBAC rules ...
	I0327 22:03:13.867116 1416963 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 22:03:13.873705 1416963 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 22:03:13.881397 1416963 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 22:03:13.885012 1416963 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 22:03:13.892380 1416963 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 22:03:13.896149 1416963 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 22:03:13.909562 1416963 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 22:03:14.151861 1416963 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 22:03:14.279151 1416963 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 22:03:14.280447 1416963 kubeadm.go:309] 
	I0327 22:03:14.280515 1416963 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 22:03:14.280520 1416963 kubeadm.go:309] 
	I0327 22:03:14.280595 1416963 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 22:03:14.280599 1416963 kubeadm.go:309] 
	I0327 22:03:14.280626 1416963 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 22:03:14.280683 1416963 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 22:03:14.280732 1416963 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 22:03:14.280737 1416963 kubeadm.go:309] 
	I0327 22:03:14.280788 1416963 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 22:03:14.280792 1416963 kubeadm.go:309] 
	I0327 22:03:14.280839 1416963 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 22:03:14.280843 1416963 kubeadm.go:309] 
	I0327 22:03:14.280893 1416963 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 22:03:14.280974 1416963 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 22:03:14.281041 1416963 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 22:03:14.281045 1416963 kubeadm.go:309] 
	I0327 22:03:14.281125 1416963 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 22:03:14.281199 1416963 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 22:03:14.281206 1416963 kubeadm.go:309] 
	I0327 22:03:14.281286 1416963 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token soycl3.pnefi2lyhzs0ulcp \
	I0327 22:03:14.281384 1416963 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:038d7b5f932f10c373e3310f4e838f1265c6fa4d2eb6aaca998c756d4d02dd59 \
	I0327 22:03:14.281404 1416963 kubeadm.go:309] 	--control-plane 
	I0327 22:03:14.281409 1416963 kubeadm.go:309] 
	I0327 22:03:14.281489 1416963 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 22:03:14.281494 1416963 kubeadm.go:309] 
	I0327 22:03:14.281572 1416963 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token soycl3.pnefi2lyhzs0ulcp \
	I0327 22:03:14.281669 1416963 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:038d7b5f932f10c373e3310f4e838f1265c6fa4d2eb6aaca998c756d4d02dd59 
	I0327 22:03:14.285882 1416963 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0327 22:03:14.285998 1416963 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 22:03:14.286016 1416963 cni.go:84] Creating CNI manager for ""
	I0327 22:03:14.286024 1416963 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:03:14.288816 1416963 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0327 22:03:14.290623 1416963 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0327 22:03:14.295041 1416963 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0327 22:03:14.295066 1416963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0327 22:03:14.329784 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0327 22:03:14.666226 1416963 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 22:03:14.666364 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:14.666471 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-135346 minikube.k8s.io/updated_at=2024_03_27T22_03_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81 minikube.k8s.io/name=addons-135346 minikube.k8s.io/primary=true
	I0327 22:03:14.799605 1416963 ops.go:34] apiserver oom_adj: -16
	I0327 22:03:14.799690 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:15.300465 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:15.799812 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:16.300677 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:16.799824 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:17.300646 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:17.800776 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:18.299910 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:18.800066 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:19.300334 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:19.799847 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:20.300688 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:20.800774 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:21.300681 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:21.800511 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:22.300636 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:22.800297 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:23.300334 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:23.800093 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:24.299965 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:24.800333 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:25.300572 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:25.799885 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:26.299818 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:26.799815 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:27.299826 1416963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 22:03:27.425957 1416963 kubeadm.go:1107] duration metric: took 12.759640691s to wait for elevateKubeSystemPrivileges
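
The burst of `kubectl get sa default` calls above is a fixed-interval poll, roughly every 500ms, until the control plane has created the "default" service account; that is what elevateKubeSystemPrivileges waits on (12.76s here) before binding kube-system to cluster-admin. The same wait loop sketched in Go; the timeout value is an assumption, not taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll like the repeated `kubectl get sa default` calls in the log.
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
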
	W0327 22:03:27.425995 1416963 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 22:03:27.426003 1416963 kubeadm.go:393] duration metric: took 28.982424385s to StartCluster
	I0327 22:03:27.426019 1416963 settings.go:142] acquiring lock: {Name:mk0422242c5bd9a591643a1eff705818469bc24b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:03:27.426144 1416963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:03:27.426563 1416963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/kubeconfig: {Name:mkbeaefc44aca3b944acccf918e2fc82ac53211f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:03:27.426771 1416963 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 22:03:27.429484 1416963 out.go:177] * Verifying Kubernetes components...
	I0327 22:03:27.426917 1416963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 22:03:27.427097 1416963 config.go:182] Loaded profile config "addons-135346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:03:27.427108 1416963 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0327 22:03:27.431883 1416963 addons.go:69] Setting yakd=true in profile "addons-135346"
	I0327 22:03:27.431893 1416963 addons.go:69] Setting ingress=true in profile "addons-135346"
	I0327 22:03:27.431920 1416963 addons.go:234] Setting addon ingress=true in "addons-135346"
	I0327 22:03:27.431922 1416963 addons.go:234] Setting addon yakd=true in "addons-135346"
	I0327 22:03:27.431952 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.431972 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.432453 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.432481 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.434313 1416963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 22:03:27.434556 1416963 addons.go:69] Setting cloud-spanner=true in profile "addons-135346"
	I0327 22:03:27.434584 1416963 addons.go:234] Setting addon cloud-spanner=true in "addons-135346"
	I0327 22:03:27.434611 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.435003 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.435494 1416963 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-135346"
	I0327 22:03:27.435550 1416963 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-135346"
	I0327 22:03:27.435577 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.435955 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.437555 1416963 addons.go:69] Setting ingress-dns=true in profile "addons-135346"
	I0327 22:03:27.437587 1416963 addons.go:234] Setting addon ingress-dns=true in "addons-135346"
	I0327 22:03:27.437626 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.438022 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.439276 1416963 addons.go:69] Setting inspektor-gadget=true in profile "addons-135346"
	I0327 22:03:27.439311 1416963 addons.go:234] Setting addon inspektor-gadget=true in "addons-135346"
	I0327 22:03:27.439341 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.439719 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.442217 1416963 addons.go:69] Setting default-storageclass=true in profile "addons-135346"
	I0327 22:03:27.442264 1416963 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-135346"
	I0327 22:03:27.442664 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.455868 1416963 addons.go:69] Setting metrics-server=true in profile "addons-135346"
	I0327 22:03:27.455920 1416963 addons.go:234] Setting addon metrics-server=true in "addons-135346"
	I0327 22:03:27.455961 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.456404 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.466601 1416963 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-135346"
	I0327 22:03:27.466922 1416963 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-135346"
	I0327 22:03:27.466958 1416963 addons.go:69] Setting gcp-auth=true in profile "addons-135346"
	I0327 22:03:27.466997 1416963 mustload.go:65] Loading cluster: addons-135346
	I0327 22:03:27.467047 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.467183 1416963 config.go:182] Loaded profile config "addons-135346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:03:27.466794 1416963 addons.go:69] Setting storage-provisioner=true in profile "addons-135346"
	I0327 22:03:27.482769 1416963 addons.go:234] Setting addon storage-provisioner=true in "addons-135346"
	I0327 22:03:27.482862 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.483908 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.466799 1416963 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-135346"
	I0327 22:03:27.487760 1416963 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-135346"
	I0327 22:03:27.488958 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.501685 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.466806 1416963 addons.go:69] Setting volumesnapshots=true in profile "addons-135346"
	I0327 22:03:27.511096 1416963 addons.go:234] Setting addon volumesnapshots=true in "addons-135346"
	I0327 22:03:27.511147 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.511635 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.523062 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.466784 1416963 addons.go:69] Setting registry=true in profile "addons-135346"
	I0327 22:03:27.526287 1416963 addons.go:234] Setting addon registry=true in "addons-135346"
	I0327 22:03:27.526329 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.528283 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.545952 1416963 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0327 22:03:27.548328 1416963 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0327 22:03:27.548349 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0327 22:03:27.548417 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
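The docker container inspect template on the line above is how minikube discovers which host port Docker mapped to the node container's 22/tcp; the sshutil.go lines further down then dial 127.0.0.1 on that port and stream each addon manifest from memory onto the node. A minimal Go sketch of the same port lookup, assuming only that docker is on PATH (illustrative, not minikube's actual helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshPort resolves the host port Docker mapped to the container's
    // 22/tcp, using the same inspect template seen in the log above.
    func sshPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshPort("addons-135346")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// The sshutil.go lines in the log then dial this endpoint and
    	// copy manifests to /etc/kubernetes/addons/ on the node.
    	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
    }

The Port:34300 in the "new ssh client" lines below is exactly the value this lookup returns.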
	I0327 22:03:27.570969 1416963 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0327 22:03:27.575621 1416963 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 22:03:27.575695 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0327 22:03:27.575796 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.590704 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0327 22:03:27.601548 1416963 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0327 22:03:27.608046 1416963 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0327 22:03:27.608119 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0327 22:03:27.608236 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.614512 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0327 22:03:27.616590 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0327 22:03:27.618704 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0327 22:03:27.604118 1416963 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0327 22:03:27.604163 1416963 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 22:03:27.604170 1416963 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 22:03:27.622560 1416963 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0327 22:03:27.620895 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0327 22:03:27.626602 1416963 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 22:03:27.624735 1416963 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 22:03:27.624768 1416963 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0327 22:03:27.624775 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0327 22:03:27.624849 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.628471 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 22:03:27.628539 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.643489 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0327 22:03:27.643574 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.653190 1416963 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0327 22:03:27.666476 1416963 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 22:03:27.668657 1416963 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 22:03:27.668689 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 22:03:27.668755 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.666636 1416963 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 22:03:27.669041 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0327 22:03:27.669087 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.654939 1416963 addons.go:234] Setting addon default-storageclass=true in "addons-135346"
	I0327 22:03:27.694799 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0327 22:03:27.694982 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.695499 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.701818 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0327 22:03:27.717489 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0327 22:03:27.719476 1416963 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0327 22:03:27.718044 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.726167 1416963 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 22:03:27.726188 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0327 22:03:27.726243 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.726906 1416963 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0327 22:03:27.729525 1416963 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 22:03:27.729582 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0327 22:03:27.729671 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.747759 1416963 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-135346"
	I0327 22:03:27.747807 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:27.748202 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:27.763923 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.766780 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.802596 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.805830 1416963 out.go:177]   - Using image docker.io/registry:2.8.3
	I0327 22:03:27.807898 1416963 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0327 22:03:27.810061 1416963 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0327 22:03:27.810080 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0327 22:03:27.810159 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.825553 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.843636 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.866584 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.877684 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.888263 1416963 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 22:03:27.888282 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 22:03:27.888342 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.905450 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.905498 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.905446 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.923141 1416963 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0327 22:03:27.930700 1416963 out.go:177]   - Using image docker.io/busybox:stable
	I0327 22:03:27.932568 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.933258 1416963 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 22:03:27.933274 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0327 22:03:27.933335 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:27.955427 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:27.962063 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:28.423139 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 22:03:28.534426 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 22:03:28.563376 1416963 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.128891724s)
	I0327 22:03:28.563564 1416963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 22:03:28.563657 1416963 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.129308164s)
	I0327 22:03:28.563738 1416963 ssh_runner.go:195] Run: sudo systemctl start kubelet
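Note the fan-out here: several multi-second kubectl applies are started back to back, and their "Completed:" lines later overlap, so the addon manifests are evidently applied concurrently. A minimal sketch of that dispatch pattern using errgroup — an assumption about the structure, not minikube's actual code:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"

    	"golang.org/x/sync/errgroup"
    )

    func main() {
    	// Stand-ins for the addon manifests applied above.
    	manifests := []string{
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    		"/etc/kubernetes/addons/ingress-deploy.yaml",
    	}
    	g, ctx := errgroup.WithContext(context.Background())
    	for _, m := range manifests {
    		m := m // capture a per-iteration copy (pre-Go 1.22)
    		g.Go(func() error {
    			// One apply per goroutine; a failure cancels the group's context.
    			return exec.CommandContext(ctx, "kubectl", "apply", "-f", m).Run()
    		})
    	}
    	if err := g.Wait(); err != nil {
    		fmt.Println("addon apply failed:", err)
    	}
    }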
	I0327 22:03:28.572409 1416963 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 22:03:28.572477 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0327 22:03:28.576155 1416963 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 22:03:28.576224 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0327 22:03:28.582862 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 22:03:28.627533 1416963 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0327 22:03:28.627601 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0327 22:03:28.637011 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0327 22:03:28.665307 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 22:03:28.679149 1416963 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 22:03:28.679218 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0327 22:03:28.703091 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 22:03:28.756603 1416963 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0327 22:03:28.756672 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0327 22:03:28.770162 1416963 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0327 22:03:28.770224 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0327 22:03:28.784603 1416963 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 22:03:28.784675 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0327 22:03:28.790004 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 22:03:28.802451 1416963 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 22:03:28.802521 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0327 22:03:28.803001 1416963 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0327 22:03:28.803041 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0327 22:03:28.876065 1416963 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 22:03:28.876135 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 22:03:28.934181 1416963 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 22:03:28.934254 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0327 22:03:29.015603 1416963 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0327 22:03:29.015674 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0327 22:03:29.024472 1416963 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 22:03:29.024542 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0327 22:03:29.035311 1416963 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0327 22:03:29.035380 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0327 22:03:29.084316 1416963 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 22:03:29.084392 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0327 22:03:29.204040 1416963 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 22:03:29.204109 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0327 22:03:29.207853 1416963 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 22:03:29.207922 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 22:03:29.265834 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0327 22:03:29.291370 1416963 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0327 22:03:29.291440 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0327 22:03:29.310152 1416963 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 22:03:29.310221 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0327 22:03:29.313445 1416963 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 22:03:29.313513 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0327 22:03:29.396867 1416963 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 22:03:29.396944 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0327 22:03:29.405198 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 22:03:29.595943 1416963 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 22:03:29.595969 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0327 22:03:29.599114 1416963 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 22:03:29.599140 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0327 22:03:29.613578 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0327 22:03:29.681184 1416963 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0327 22:03:29.681213 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0327 22:03:29.877304 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 22:03:29.879915 1416963 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 22:03:29.879941 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0327 22:03:29.898484 1416963 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 22:03:29.898506 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0327 22:03:30.063554 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 22:03:30.073794 1416963 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 22:03:30.073870 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0327 22:03:30.354442 1416963 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 22:03:30.354521 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0327 22:03:30.550601 1416963 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 22:03:30.550627 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0327 22:03:30.809228 1416963 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 22:03:30.809253 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0327 22:03:31.050293 1416963 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 22:03:31.050326 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0327 22:03:31.289935 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 22:03:32.230067 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.806893593s)
	I0327 22:03:32.230135 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.695653535s)
	I0327 22:03:32.230364 1416963 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.66660194s)
	I0327 22:03:32.230706 1416963 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.667104483s)
	I0327 22:03:32.230737 1416963 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
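The pipeline that just completed rewrites the Corefile held in the coredns ConfigMap: its sed expressions insert a log directive before errors and a hosts block before the forward plugin. Reconstructed from those expressions, the resulting fragment looks like this (unchanged directives elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

This is what lets pods in the cluster resolve host.minikube.internal to the Docker network gateway, 192.168.49.1.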
	I0327 22:03:32.231549 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.648618334s)
	I0327 22:03:32.231597 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.594530855s)
	I0327 22:03:32.232114 1416963 node_ready.go:35] waiting up to 6m0s for node "addons-135346" to be "Ready" ...
	I0327 22:03:32.237335 1416963 node_ready.go:49] node "addons-135346" has status "Ready":"True"
	I0327 22:03:32.237362 1416963 node_ready.go:38] duration metric: took 5.035686ms for node "addons-135346" to be "Ready" ...
	I0327 22:03:32.237373 1416963 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 22:03:32.250742 1416963 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fz7cw" in "kube-system" namespace to be "Ready" ...
	I0327 22:03:32.736458 1416963 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-135346" context rescaled to 1 replicas
	I0327 22:03:32.756232 1416963 pod_ready.go:97] error getting pod "coredns-76f75df574-fz7cw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-fz7cw" not found
	I0327 22:03:32.756311 1416963 pod_ready.go:81] duration metric: took 505.540152ms for pod "coredns-76f75df574-fz7cw" in "kube-system" namespace to be "Ready" ...
	E0327 22:03:32.756342 1416963 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-fz7cw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-fz7cw" not found
	I0327 22:03:32.756381 1416963 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rg6qv" in "kube-system" namespace to be "Ready" ...
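The "not found (skipping!)" lines above follow directly from the rescale logged just before them: the coredns Deployment is scaled from two replicas to one while the wait loop is watching, so one pod is deleted mid-wait. The loop treats NotFound as "move on to the next pod" rather than a failure, which is why it switches straight to coredns-76f75df574-rg6qv. A compilable Go sketch of that tolerance, assuming a configured client-go Clientset (names are illustrative, not minikube's):

    package kapisketch

    import (
    	"context"
    	"fmt"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podGoneOrReady reports true when the pod is Ready or has been
    // deleted out from under the wait (as the rescale above did).
    func podGoneOrReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		fmt.Printf("pod %q not found (skipping!)\n", name)
    		return true, nil // deleted mid-wait: skip, don't fail
    	}
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == "Ready" {
    			return c.Status == "True", nil
    		}
    	}
    	return false, nil
    }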
	I0327 22:03:34.734942 1416963 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0327 22:03:34.735109 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:34.766084 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:34.769924 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:35.037131 1416963 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0327 22:03:35.100371 1416963 addons.go:234] Setting addon gcp-auth=true in "addons-135346"
	I0327 22:03:35.100438 1416963 host.go:66] Checking if "addons-135346" exists ...
	I0327 22:03:35.100884 1416963 cli_runner.go:164] Run: docker container inspect addons-135346 --format={{.State.Status}}
	I0327 22:03:35.124049 1416963 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0327 22:03:35.124116 1416963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135346
	I0327 22:03:35.144816 1416963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34300 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/addons-135346/id_rsa Username:docker}
	I0327 22:03:35.423869 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.758481077s)
	I0327 22:03:35.423960 1416963 addons.go:470] Verifying addon ingress=true in "addons-135346"
	I0327 22:03:35.427058 1416963 out.go:177] * Verifying ingress addon...
	I0327 22:03:35.424217 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.721058733s)
	I0327 22:03:35.424324 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.634241925s)
	I0327 22:03:35.424351 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.158451459s)
	I0327 22:03:35.424440 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.019171693s)
	I0327 22:03:35.424544 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.547210651s)
	I0327 22:03:35.424586 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.36093341s)
	I0327 22:03:35.424600 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.810868486s)
	I0327 22:03:35.427416 1416963 addons.go:470] Verifying addon registry=true in "addons-135346"
	I0327 22:03:35.434227 1416963 out.go:177] * Verifying registry addon...
	I0327 22:03:35.432091 1416963 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0327 22:03:35.427554 1416963 addons.go:470] Verifying addon metrics-server=true in "addons-135346"
	W0327 22:03:35.427544 1416963 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 22:03:35.438931 1416963 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0327 22:03:35.440843 1416963 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-135346 service yakd-dashboard -n yakd-dashboard
	
	I0327 22:03:35.441004 1416963 retry.go:31] will retry after 250.74571ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 22:03:35.445445 1416963 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0327 22:03:35.445464 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:35.446671 1416963 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 22:03:35.446719 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
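Much of the remainder of the log is this kapi.go:96 poll: each "Found N Pods" line is a list by label selector, and the "waiting for pod ..." lines repeat roughly every half second until every matched pod leaves Pending. A minimal client-go sketch of such a poll — the interval and structure are assumptions, not minikube's exact code:

    package kapisketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitRunning lists pods matching selector in ns and re-checks on an
    // interval until all of them report phase Running.
    func waitRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		running := 0
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				running++
    			}
    		}
    		if len(pods.Items) > 0 && running == len(pods.Items) {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }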
	I0327 22:03:35.695019 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
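The --force re-apply above is the retry promised at retry.go:31. The first attempt failed because the VolumeSnapshotClass custom resource was applied in the same batch as the CRD that defines it, and the API server had not yet registered the new type in discovery ("no matches for kind ... ensure CRDs are installed first"). By the time of the retry, the CRDs created in the first attempt are established, so it completes (see the 1.78s "Completed:" line at 22:03:37.477). A minimal sketch of the retry shape, with illustrative names:

    package kapisketch

    import "time"

    // applyWithRetry re-runs apply after a short delay, the pattern used
    // when a first apply races CRD registration.
    func applyWithRetry(apply func() error, delay time.Duration, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = apply(); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    	}
    	return err
    }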
	I0327 22:03:35.942586 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:35.945794 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:36.456857 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:36.464868 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:36.600667 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.310686024s)
	I0327 22:03:36.600748 1416963 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-135346"
	I0327 22:03:36.603432 1416963 out.go:177] * Verifying csi-hostpath-driver addon...
	I0327 22:03:36.600994 1416963 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.476917356s)
	I0327 22:03:36.607549 1416963 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0327 22:03:36.609842 1416963 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0327 22:03:36.621797 1416963 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 22:03:36.619959 1416963 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 22:03:36.623929 1416963 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 22:03:36.623971 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0327 22:03:36.624075 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:36.716815 1416963 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 22:03:36.716889 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0327 22:03:36.802322 1416963 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 22:03:36.802346 1416963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0327 22:03:36.868638 1416963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 22:03:36.942872 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:36.945770 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:37.114039 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:37.263396 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:37.442478 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:37.445020 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:37.477479 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.782370027s)
	I0327 22:03:37.615719 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:37.874487 1416963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.005754298s)
	I0327 22:03:37.877157 1416963 addons.go:470] Verifying addon gcp-auth=true in "addons-135346"
	I0327 22:03:37.879748 1416963 out.go:177] * Verifying gcp-auth addon...
	I0327 22:03:37.882331 1416963 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0327 22:03:37.887513 1416963 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0327 22:03:37.887572 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:37.943542 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:37.958640 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:38.114825 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:38.387117 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:38.444355 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:38.449464 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:38.615302 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:38.886589 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:38.947065 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:38.947863 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:39.115200 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:39.264466 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:39.389489 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:39.445120 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:39.449849 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:39.614725 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:39.887774 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:39.942330 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:39.946127 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:40.114579 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:40.387021 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:40.442770 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:40.445938 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:40.614087 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:40.886720 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:40.946571 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:40.950988 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:41.119053 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:41.387258 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:41.444153 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:41.449718 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:41.614522 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:41.764655 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:41.886693 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:41.944332 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:41.949358 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:42.115631 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:42.386400 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:42.444093 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:42.447985 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:42.613806 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:42.886289 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:42.943954 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:42.948687 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:43.114323 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:43.388979 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:43.443243 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:43.446436 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:43.615006 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:43.897795 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:43.943532 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:43.946396 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:44.117598 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:44.263749 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:44.387087 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:44.443588 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:44.448213 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:44.614106 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:44.886473 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:44.943678 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:44.948142 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:45.124283 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:45.389340 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:45.446687 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:45.446922 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:45.614032 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:45.886441 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:45.942809 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:45.945625 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:46.114306 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:46.386086 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:46.442499 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:46.446647 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:46.613868 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:46.762894 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:46.886522 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:46.942473 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:46.945381 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:47.114133 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:47.386997 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:47.443376 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:47.445652 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:47.613050 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:47.886891 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:47.942844 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:47.945723 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:48.114383 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:48.385809 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:48.443137 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:48.446008 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:48.614023 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:48.763281 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:48.886176 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:48.943673 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:48.946606 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:49.113321 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:49.386797 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:49.442917 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:49.446089 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:49.613957 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:49.886360 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:49.943018 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:49.947063 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:50.114787 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:50.386334 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:50.443293 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:50.446529 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:50.613177 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:50.764108 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:50.886166 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:50.943023 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:50.945775 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:51.113959 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:51.386356 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:51.442680 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:51.445736 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:51.614864 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:51.887330 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:51.942661 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:51.945575 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:52.113067 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:52.386834 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:52.442719 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:52.447141 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:52.613369 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:52.886124 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:52.943524 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:52.945605 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:53.113451 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:53.264052 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:53.386660 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:53.442851 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:53.445992 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:53.614228 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:53.886022 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:53.944159 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:53.948713 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:54.113969 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:54.386675 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:54.443958 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:54.447311 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:54.614818 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:54.885705 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:54.942902 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:54.946872 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:55.115523 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:55.270317 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:55.389431 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:55.444103 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:55.455441 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:55.615096 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:55.885732 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:55.942543 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:55.945277 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:56.113875 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:56.386476 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:56.445874 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:56.446400 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:56.625130 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:56.887161 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:56.942611 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:56.947534 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:57.113586 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:57.386844 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:57.443140 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:57.446035 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:57.613753 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:57.763207 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:03:57.886701 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:57.943744 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:57.945983 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:58.114029 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:58.386610 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:58.446084 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:58.446906 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:58.613842 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:58.886977 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:58.943408 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:58.946256 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:59.112865 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:59.386892 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:59.442717 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:59.445891 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:03:59.613612 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:03:59.887203 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:03:59.943246 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:03:59.948896 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:00.136474 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:00.290211 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:00.388163 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:00.470513 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:00.477325 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:00.615835 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:00.886825 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:00.944001 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:00.946758 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:01.113172 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:01.386909 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:01.443584 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:01.447541 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:01.613958 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:01.885976 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:01.943403 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:01.946458 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:02.113248 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:02.386454 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:02.443246 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:02.446335 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:02.613356 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:02.763506 1416963 pod_ready.go:102] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:02.886251 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:02.944370 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:02.946447 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:03.145733 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:03.390507 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:03.448304 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:03.452367 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:03.613812 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:03.886342 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:03.945044 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:03.948561 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:04.113729 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:04.386971 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:04.445703 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:04.448322 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:04.616285 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:04.769920 1416963 pod_ready.go:92] pod "coredns-76f75df574-rg6qv" in "kube-system" namespace has status "Ready":"True"
	I0327 22:04:04.769993 1416963 pod_ready.go:81] duration metric: took 32.013586349s for pod "coredns-76f75df574-rg6qv" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.770021 1416963 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-135346" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.792947 1416963 pod_ready.go:92] pod "etcd-addons-135346" in "kube-system" namespace has status "Ready":"True"
	I0327 22:04:04.793032 1416963 pod_ready.go:81] duration metric: took 22.982739ms for pod "etcd-addons-135346" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.793061 1416963 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-135346" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.812286 1416963 pod_ready.go:92] pod "kube-apiserver-addons-135346" in "kube-system" namespace has status "Ready":"True"
	I0327 22:04:04.812358 1416963 pod_ready.go:81] duration metric: took 19.261045ms for pod "kube-apiserver-addons-135346" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.812402 1416963 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-135346" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.828140 1416963 pod_ready.go:92] pod "kube-controller-manager-addons-135346" in "kube-system" namespace has status "Ready":"True"
	I0327 22:04:04.828203 1416963 pod_ready.go:81] duration metric: took 15.766067ms for pod "kube-controller-manager-addons-135346" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.828249 1416963 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xcxjd" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.842660 1416963 pod_ready.go:92] pod "kube-proxy-xcxjd" in "kube-system" namespace has status "Ready":"True"
	I0327 22:04:04.842730 1416963 pod_ready.go:81] duration metric: took 14.46079ms for pod "kube-proxy-xcxjd" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.842758 1416963 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-135346" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:04.886311 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:04.944906 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:04.946613 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:05.114599 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:05.160930 1416963 pod_ready.go:92] pod "kube-scheduler-addons-135346" in "kube-system" namespace has status "Ready":"True"
	I0327 22:04:05.160955 1416963 pod_ready.go:81] duration metric: took 318.175575ms for pod "kube-scheduler-addons-135346" in "kube-system" namespace to be "Ready" ...
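The pod_ready.go lines above poll each pod's Ready condition until it flips to True, at roughly the 2s cadence visible between status lines. A minimal client-go sketch of that kind of check (illustrative names only, not minikube's actual implementation):

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the pod's Ready condition is True — the
	// "Ready":"True"/"False" value printed in the log lines above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls until the named pod is Ready or the timeout expires.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}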
	I0327 22:04:05.160966 1416963 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:05.387088 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:05.443426 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:05.447017 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:05.614478 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:05.887041 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:05.944119 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:05.947049 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:06.116412 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:06.386729 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:06.445472 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:06.449530 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 22:04:06.613307 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:06.886612 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:06.942982 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:06.945723 1416963 kapi.go:107] duration metric: took 31.506792995s to wait for kubernetes.io/minikube-addons=registry ...
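The kapi.go:96/107 lines above are the same idea applied to a label selector: every pod matching the selector must reach Running before the "duration metric" line is printed. A hedged sketch of such a loop (waitForLabel and allRunning are hypothetical helpers):

	package sketch

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	// waitForLabel polls pods matching selector until all are Running, logging
	// the same "waiting for pod ... current state" breadcrumb seen above.
	func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
				return nil
			}
			log.Printf("waiting for pod %q, current state: Pending", selector)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s: not Running within %v", selector, timeout)
	}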
	I0327 22:04:07.114210 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:07.168016 1416963 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:07.386731 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:07.443102 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:07.613845 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:07.886632 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:07.942778 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:08.113858 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:08.386565 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:08.444518 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:08.613165 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:08.885816 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:08.943421 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:09.114260 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:09.168088 1416963 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:09.395963 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:09.443891 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:09.613816 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:09.889023 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:09.944258 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:10.115228 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:10.386949 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:10.449260 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:10.615533 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:10.887055 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:10.943130 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:11.114398 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:11.168371 1416963 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:11.386341 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:11.447202 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:11.614435 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:11.886066 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:11.943996 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:12.114590 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:12.389045 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:12.444750 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:12.620571 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:12.886053 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:12.942510 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:13.114157 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:13.174061 1416963 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:13.387200 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:13.442663 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:13.614073 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:13.886219 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:13.943133 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:14.114045 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:14.386631 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:14.443238 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:14.614815 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:14.890013 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:14.943492 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:15.114188 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:15.385995 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:15.444052 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:15.614578 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:15.668292 1416963 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:15.886846 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:15.942807 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:16.114044 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:16.387189 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:16.442566 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:16.613211 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:16.886711 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:16.943652 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:17.113433 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:17.386826 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:17.442588 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:17.613950 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:17.885858 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:17.944216 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:18.116289 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:18.168140 1416963 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:18.387538 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:18.443584 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:18.613236 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:18.886944 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:18.943871 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:19.113420 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:19.387794 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:19.443130 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:19.614458 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:19.886861 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:19.944593 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:20.123964 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:20.175332 1416963 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace has status "Ready":"False"
	I0327 22:04:20.386143 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:20.443216 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:20.614984 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:20.889925 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:20.943868 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:21.114876 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:21.167361 1416963 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace has status "Ready":"True"
	I0327 22:04:21.167438 1416963 pod_ready.go:81] duration metric: took 16.006463736s for pod "nvidia-device-plugin-daemonset-6r9b9" in "kube-system" namespace to be "Ready" ...
	I0327 22:04:21.167462 1416963 pod_ready.go:38] duration metric: took 48.93007715s of extra waiting for all system-critical pods and for pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 22:04:21.167478 1416963 api_server.go:52] waiting for apiserver process to appear ...
	I0327 22:04:21.167553 1416963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 22:04:21.183773 1416963 api_server.go:72] duration metric: took 53.756967627s to wait for apiserver process to appear ...
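The apiserver process check above is a shell-level probe rather than an API call: pgrep exits 0 only when a process matches the pattern. A runnable stand-in using os/exec locally, where minikube uses its ssh_runner over the node:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess retries the same pgrep pattern the log shows
	// until kube-apiserver appears or the timeout expires.
	func waitForAPIServerProcess(timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			// pgrep exits 0 when at least one process matches the pattern.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(time.Minute); err != nil {
			fmt.Println(err)
		}
	}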
	I0327 22:04:21.183798 1416963 api_server.go:88] waiting for apiserver healthz status ...
	I0327 22:04:21.183818 1416963 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 22:04:21.191481 1416963 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0327 22:04:21.192715 1416963 api_server.go:141] control plane version: v1.29.3
	I0327 22:04:21.192741 1416963 api_server.go:131] duration metric: took 8.935591ms to wait for apiserver health ...
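The healthz probe logged at api_server.go:253–279 is a plain HTTPS GET: a 200 response with body "ok" counts as healthy. A self-contained sketch; skipping TLS verification is an assumption made here for the test cluster's self-signed certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiServerHealthy GETs <endpoint>/healthz and reports whether the
	// apiserver answered 200 "ok", as in the log above.
	func apiServerHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiServerHealthy("https://192.168.49.2:8443")
		fmt.Println(ok, err)
	}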
	I0327 22:04:21.192749 1416963 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 22:04:21.203205 1416963 system_pods.go:59] 18 kube-system pods found
	I0327 22:04:21.203240 1416963 system_pods.go:61] "coredns-76f75df574-rg6qv" [80e6875a-1800-475c-98ff-f551eb204d82] Running
	I0327 22:04:21.203247 1416963 system_pods.go:61] "csi-hostpath-attacher-0" [51711cce-e8c1-4a42-a161-164a2fe99e05] Running
	I0327 22:04:21.203251 1416963 system_pods.go:61] "csi-hostpath-resizer-0" [a67fe2dd-8fc3-4e38-a4db-bdd1d0dce7c4] Running
	I0327 22:04:21.203315 1416963 system_pods.go:61] "csi-hostpathplugin-brcrz" [b72cf851-5985-465b-8717-e2c10feeb71b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 22:04:21.203331 1416963 system_pods.go:61] "etcd-addons-135346" [4e6a8669-6954-4ef1-a0a1-c4a9910c4a1d] Running
	I0327 22:04:21.203337 1416963 system_pods.go:61] "kindnet-7zx7h" [ed9fe405-017a-4a64-8ecc-b6ee19087650] Running
	I0327 22:04:21.203344 1416963 system_pods.go:61] "kube-apiserver-addons-135346" [ae5b1eeb-a650-4b37-806b-9723f441e4dc] Running
	I0327 22:04:21.203351 1416963 system_pods.go:61] "kube-controller-manager-addons-135346" [176b2b32-6edd-48f0-b8e5-45179b051c15] Running
	I0327 22:04:21.203357 1416963 system_pods.go:61] "kube-ingress-dns-minikube" [b098c2f9-1972-41a0-866d-39b139e2756d] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0327 22:04:21.203361 1416963 system_pods.go:61] "kube-proxy-xcxjd" [dc815817-8aed-4546-b159-f6ed1b2433e6] Running
	I0327 22:04:21.203372 1416963 system_pods.go:61] "kube-scheduler-addons-135346" [3b1beb42-8f7c-4b6f-bb9c-fa6610fa2c6b] Running
	I0327 22:04:21.203391 1416963 system_pods.go:61] "metrics-server-69cf46c98-7c8jn" [f9a04f3b-1673-4048-884b-d69a6e500b20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0327 22:04:21.203402 1416963 system_pods.go:61] "nvidia-device-plugin-daemonset-6r9b9" [7678e1f1-ecdb-4201-aaa3-0b78cfb78319] Running
	I0327 22:04:21.203407 1416963 system_pods.go:61] "registry-proxy-8lb9d" [ce79ef2c-7855-4f60-b346-89e09148c93c] Running
	I0327 22:04:21.203410 1416963 system_pods.go:61] "registry-xxnz7" [97d2d3a2-6694-404e-af05-f67f3d8df2fd] Running
	I0327 22:04:21.203427 1416963 system_pods.go:61] "snapshot-controller-58dbcc7b99-4v2fk" [2200990d-0851-4e03-95c9-10b88d1b37ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 22:04:21.203442 1416963 system_pods.go:61] "snapshot-controller-58dbcc7b99-r8mzz" [71fa7bb9-593c-414e-a210-f3230570440e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 22:04:21.203449 1416963 system_pods.go:61] "storage-provisioner" [445956a3-afe0-475f-a0e9-7913759a1d50] Running
	I0327 22:04:21.203458 1416963 system_pods.go:74] duration metric: took 10.703215ms to wait for pod list to return data ...
	I0327 22:04:21.203467 1416963 default_sa.go:34] waiting for default service account to be created ...
	I0327 22:04:21.206227 1416963 default_sa.go:45] found service account: "default"
	I0327 22:04:21.206253 1416963 default_sa.go:55] duration metric: took 2.774956ms for default service account to be created ...
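default_sa.go waits here because the default ServiceAccount is created asynchronously in each new namespace, and pods referencing it cannot be admitted until it exists. A minimal sketch of that check (illustrative, not minikube's code):

	package sketch

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForDefaultSA polls until the "default" ServiceAccount exists in the
	// "default" namespace, as the default_sa.go lines above record.
	func waitForDefaultSA(cs kubernetes.Interface, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(
				context.TODO(), "default", metav1.GetOptions{}); err == nil {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("default service account not created within %v", timeout)
	}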
	I0327 22:04:21.206262 1416963 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 22:04:21.216705 1416963 system_pods.go:86] 18 kube-system pods found
	I0327 22:04:21.216739 1416963 system_pods.go:89] "coredns-76f75df574-rg6qv" [80e6875a-1800-475c-98ff-f551eb204d82] Running
	I0327 22:04:21.216747 1416963 system_pods.go:89] "csi-hostpath-attacher-0" [51711cce-e8c1-4a42-a161-164a2fe99e05] Running
	I0327 22:04:21.216752 1416963 system_pods.go:89] "csi-hostpath-resizer-0" [a67fe2dd-8fc3-4e38-a4db-bdd1d0dce7c4] Running
	I0327 22:04:21.216760 1416963 system_pods.go:89] "csi-hostpathplugin-brcrz" [b72cf851-5985-465b-8717-e2c10feeb71b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 22:04:21.216765 1416963 system_pods.go:89] "etcd-addons-135346" [4e6a8669-6954-4ef1-a0a1-c4a9910c4a1d] Running
	I0327 22:04:21.216771 1416963 system_pods.go:89] "kindnet-7zx7h" [ed9fe405-017a-4a64-8ecc-b6ee19087650] Running
	I0327 22:04:21.216777 1416963 system_pods.go:89] "kube-apiserver-addons-135346" [ae5b1eeb-a650-4b37-806b-9723f441e4dc] Running
	I0327 22:04:21.216781 1416963 system_pods.go:89] "kube-controller-manager-addons-135346" [176b2b32-6edd-48f0-b8e5-45179b051c15] Running
	I0327 22:04:21.216795 1416963 system_pods.go:89] "kube-ingress-dns-minikube" [b098c2f9-1972-41a0-866d-39b139e2756d] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0327 22:04:21.216802 1416963 system_pods.go:89] "kube-proxy-xcxjd" [dc815817-8aed-4546-b159-f6ed1b2433e6] Running
	I0327 22:04:21.216806 1416963 system_pods.go:89] "kube-scheduler-addons-135346" [3b1beb42-8f7c-4b6f-bb9c-fa6610fa2c6b] Running
	I0327 22:04:21.216813 1416963 system_pods.go:89] "metrics-server-69cf46c98-7c8jn" [f9a04f3b-1673-4048-884b-d69a6e500b20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0327 22:04:21.216823 1416963 system_pods.go:89] "nvidia-device-plugin-daemonset-6r9b9" [7678e1f1-ecdb-4201-aaa3-0b78cfb78319] Running
	I0327 22:04:21.216828 1416963 system_pods.go:89] "registry-proxy-8lb9d" [ce79ef2c-7855-4f60-b346-89e09148c93c] Running
	I0327 22:04:21.216833 1416963 system_pods.go:89] "registry-xxnz7" [97d2d3a2-6694-404e-af05-f67f3d8df2fd] Running
	I0327 22:04:21.216848 1416963 system_pods.go:89] "snapshot-controller-58dbcc7b99-4v2fk" [2200990d-0851-4e03-95c9-10b88d1b37ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 22:04:21.216854 1416963 system_pods.go:89] "snapshot-controller-58dbcc7b99-r8mzz" [71fa7bb9-593c-414e-a210-f3230570440e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 22:04:21.216861 1416963 system_pods.go:89] "storage-provisioner" [445956a3-afe0-475f-a0e9-7913759a1d50] Running
	I0327 22:04:21.216868 1416963 system_pods.go:126] duration metric: took 10.600596ms to wait for k8s-apps to be running ...
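The per-pod status strings above ("Running", "Pending / Ready:ContainersNotReady (…)") can be derived from a Pod's phase plus its container statuses. A sketch of that rendering (describePodState is a hypothetical helper; the real output also draws on pod conditions):

	package sketch

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// describePodState renders a pod roughly the way system_pods.go prints it
	// above: the phase, plus the names of any containers not yet Ready.
	func describePodState(pod *corev1.Pod) string {
		s := string(pod.Status.Phase)
		var unready []string
		for _, cs := range pod.Status.ContainerStatuses {
			if !cs.Ready {
				unready = append(unready, cs.Name)
			}
		}
		if len(unready) > 0 {
			s += fmt.Sprintf(" / Ready:ContainersNotReady (containers with unready status: %v)", unready)
		}
		return s
	}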
	I0327 22:04:21.216876 1416963 system_svc.go:44] waiting for kubelet service to be running ...
	I0327 22:04:21.216937 1416963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 22:04:21.230687 1416963 system_svc.go:56] duration metric: took 13.800599ms (WaitForService) to wait for kubelet
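The kubelet check is another exit-code probe: `systemctl is-active --quiet` exits 0 only when the unit is active, so no output parsing is needed. A one-function stand-in (again using local exec where minikube runs the command through its ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning mirrors the systemctl probe above: exit status 0 means
	// the kubelet unit is active.
	func kubeletRunning() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet running:", kubeletRunning())
	}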
	I0327 22:04:21.230760 1416963 kubeadm.go:576] duration metric: took 53.803959047s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 22:04:21.230802 1416963 node_conditions.go:102] verifying NodePressure condition ...
	I0327 22:04:21.234192 1416963 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 22:04:21.234224 1416963 node_conditions.go:123] node cpu capacity is 2
	I0327 22:04:21.234237 1416963 node_conditions.go:105] duration metric: took 3.401442ms to run NodePressure ...
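node_conditions.go reads capacity and pressure conditions off the Node object; the two capacity lines above come straight from its status. A client-go sketch of that verification (verifyNode is an illustrative name):

	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// verifyNode prints the capacity fields reported above and fails if any
	// pressure condition is not False.
	func verifyNode(cs kubernetes.Interface, name string) error {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		fmt.Println("node storage ephemeral capacity is", node.Status.Capacity.StorageEphemeral().String())
		fmt.Println("node cpu capacity is", node.Status.Capacity.Cpu().String())
		for _, c := range node.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					return fmt.Errorf("node condition %s is %s", c.Type, c.Status)
				}
			}
		}
		return nil
	}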
	I0327 22:04:21.234250 1416963 start.go:240] waiting for startup goroutines ...
	I0327 22:04:21.386576 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:21.448232 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:21.614057 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:21.887500 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:21.944191 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:22.116697 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:22.386897 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:22.442827 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:22.613953 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:22.888062 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:22.944661 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:23.114935 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:23.386452 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:23.444120 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:23.614529 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:23.888926 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:23.944109 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:24.121736 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:24.386797 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:24.443864 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:24.613578 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:24.885916 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:24.943795 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:25.114121 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:25.386268 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:25.444766 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:25.614163 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:25.885972 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:25.951273 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:26.114707 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:26.388355 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:26.443686 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:26.614079 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:26.887247 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:26.944175 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:27.114176 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:27.386256 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:27.443587 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:27.613230 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:27.886992 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:27.945243 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:28.124457 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:28.387155 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:28.443815 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:28.614748 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:28.887900 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:28.944582 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:29.113900 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:29.386647 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:29.443035 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:29.613968 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:29.886532 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:29.943090 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:30.140520 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:30.386471 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:30.442978 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:30.613914 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:30.886843 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:30.943727 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:31.113510 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:31.389454 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:31.442690 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:31.614016 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:31.886890 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:31.942360 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:32.113339 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:32.385840 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:32.442539 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:32.614827 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:32.886943 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:32.951257 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:33.113284 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:33.387067 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:33.442757 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:33.613622 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:33.886642 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:33.943721 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:34.114007 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:34.387112 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:34.442529 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:34.613829 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 22:04:34.886963 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:34.943612 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:35.113772 1416963 kapi.go:107] duration metric: took 58.50622158s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0327 22:04:35.387000 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:35.442397 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:35.886506 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:35.943736 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:36.386926 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:36.442755 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:36.886160 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:36.942365 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:37.386454 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:37.445034 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:37.886166 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:37.943727 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:38.385968 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:38.442946 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:38.886013 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:38.943741 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:39.386792 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:39.442559 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:39.886654 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:39.943949 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:40.386563 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:40.443143 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:40.885741 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:40.948711 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:41.386231 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:41.446441 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:41.886743 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:41.943713 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:42.388486 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:42.442358 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:42.886120 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:42.943692 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:43.386397 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:43.443369 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:43.885976 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:43.943646 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:44.387620 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:44.443724 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:44.886819 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:44.947887 1416963 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 22:04:45.387387 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:45.444758 1416963 kapi.go:107] duration metric: took 1m10.012666586s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0327 22:04:45.887017 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:46.386680 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:46.932331 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:47.386886 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:47.886597 1416963 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 22:04:48.385663 1416963 kapi.go:107] duration metric: took 1m10.50333024s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0327 22:04:48.387831 1416963 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-135346 cluster.
	I0327 22:04:48.390058 1416963 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0327 22:04:48.392030 1416963 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0327 22:04:48.394210 1416963 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, default-storageclass, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0327 22:04:48.396135 1416963 addons.go:505] duration metric: took 1m20.969018404s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner default-storageclass ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0327 22:04:48.396186 1416963 start.go:245] waiting for cluster config update ...
	I0327 22:04:48.396213 1416963 start.go:254] writing updated cluster config ...
	I0327 22:04:48.396505 1416963 ssh_runner.go:195] Run: rm -f paused
	I0327 22:04:48.728982 1416963 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 22:04:48.731684 1416963 out.go:177] * Done! kubectl is now configured to use "addons-135346" cluster and "default" namespace by default
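
The gcp-auth notes above describe the addon's opt-out path. A minimal way to exercise it, assuming the webhook keys off the "gcp-auth-skip-secret" label (the label key comes from the log above; the pod name, image, and label value here are illustrative):

	kubectl --context addons-135346 run skip-gcp-auth-demo --image=nginx --labels="gcp-auth-skip-secret=true"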
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	06a10a794fe3e       dd1b12fcb6097       6 seconds ago        Exited              hello-world-app            2                   0c47a29231625       hello-world-app-5d77478584-vwdkh
	568abdf91754a       b8c82647e8a25       34 seconds ago       Running             nginx                      0                   c17b4ea5d25e1       nginx
	a0391b2ba1c14       6ef582f3ec844       About a minute ago   Running             gcp-auth                   0                   f2969225c4547       gcp-auth-7d69788767-hzdrr
	7aae98f571952       6727f8bc3105d       About a minute ago   Running             cloud-spanner-emulator     0                   1438f42f5d8eb       cloud-spanner-emulator-5446596998-5rrq9
	426139cfcbee1       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr   0                   8d368b8ba31be       nvidia-device-plugin-daemonset-6r9b9
	12d19143fb7f8       20e3f2db01e81       About a minute ago   Running             yakd                       0                   0ca0d56e7a6de       yakd-dashboard-9947fc6bf-jjjdt
	ba8c3743377b4       1a024e390dd05       About a minute ago   Exited              patch                      1                   126c9f33c5d9e       ingress-nginx-admission-patch-ljxsf
	e0aa7c4004d36       1a024e390dd05       About a minute ago   Exited              create                     0                   85a701295bb01       ingress-nginx-admission-create-g8bt8
	e143a8b723b57       2437cf7621777       About a minute ago   Running             coredns                    0                   3a77ff6e23297       coredns-76f75df574-rg6qv
	265ebc99e9fb0       7ce2150c8929b       About a minute ago   Running             local-path-provisioner     0                   ee35582b18155       local-path-provisioner-78b46b4d5c-9n4xx
	44de166219b56       ba04bb24b9575       2 minutes ago        Running             storage-provisioner        0                   8a2a092d56c6a       storage-provisioner
	1a1b6dfa068e8       0e9b4a0d1e86d       2 minutes ago        Running             kube-proxy                 0                   a724e2d928c88       kube-proxy-xcxjd
	6e2634dfb27a6       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                0                   bf13ee2ff5d46       kindnet-7zx7h
	17c9dfc6c5674       121d70d9a3805       2 minutes ago        Running             kube-controller-manager    0                   a904eb00e689d       kube-controller-manager-addons-135346
	d00c79f91a13b       2581114f5709d       2 minutes ago        Running             kube-apiserver             0                   0833acbda212a       kube-apiserver-addons-135346
	10691a5abe96c       014faa467e297       2 minutes ago        Running             etcd                       0                   202921aa2f30d       etcd-addons-135346
	7b27890fa979d       4b51f9f6bc9b9       2 minutes ago        Running             kube-scheduler             0                   9d66e4e339059       kube-scheduler-addons-135346
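
In the table above, hello-world-app is on restart attempt 2 and already Exited, i.e. crash-looping, while every other container is Running. The same view can be pulled from the node directly; a sketch, assuming crictl is available in the minikube node image:

	out/minikube-linux-arm64 -p addons-135346 ssh "sudo crictl ps -a"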
	
	
	==> containerd <==
	Mar 27 22:05:54 addons-135346 containerd[766]: time="2024-03-27T22:05:54.283702834Z" level=info msg="StartContainer for \"06a10a794fe3e4cdeeddb32ed80c4884670b4e6dfc9e65d90c3492e06e33c3cd\""
	Mar 27 22:05:54 addons-135346 containerd[766]: time="2024-03-27T22:05:54.336082293Z" level=info msg="StartContainer for \"06a10a794fe3e4cdeeddb32ed80c4884670b4e6dfc9e65d90c3492e06e33c3cd\" returns successfully"
	Mar 27 22:05:54 addons-135346 containerd[766]: time="2024-03-27T22:05:54.361532683Z" level=info msg="shim disconnected" id=06a10a794fe3e4cdeeddb32ed80c4884670b4e6dfc9e65d90c3492e06e33c3cd
	Mar 27 22:05:54 addons-135346 containerd[766]: time="2024-03-27T22:05:54.361593588Z" level=warning msg="cleaning up after shim disconnected" id=06a10a794fe3e4cdeeddb32ed80c4884670b4e6dfc9e65d90c3492e06e33c3cd namespace=k8s.io
	Mar 27 22:05:54 addons-135346 containerd[766]: time="2024-03-27T22:05:54.361605535Z" level=info msg="cleaning up dead shim"
	Mar 27 22:05:54 addons-135346 containerd[766]: time="2024-03-27T22:05:54.371248291Z" level=warning msg="cleanup warnings time=\"2024-03-27T22:05:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10160 runtime=io.containerd.runc.v2\n"
	Mar 27 22:05:54 addons-135346 containerd[766]: time="2024-03-27T22:05:54.480065322Z" level=info msg="RemoveContainer for \"dac6f1854e8161e01f54f3011bc9c68647ee38d0270052a519def7589d940e52\""
	Mar 27 22:05:54 addons-135346 containerd[766]: time="2024-03-27T22:05:54.493869531Z" level=info msg="RemoveContainer for \"dac6f1854e8161e01f54f3011bc9c68647ee38d0270052a519def7589d940e52\" returns successfully"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.284591698Z" level=info msg="Kill container \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\""
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.348948135Z" level=info msg="shim disconnected" id=482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.349017196Z" level=warning msg="cleaning up after shim disconnected" id=482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d namespace=k8s.io
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.349028576Z" level=info msg="cleaning up dead shim"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.359186575Z" level=warning msg="cleanup warnings time=\"2024-03-27T22:05:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10191 runtime=io.containerd.runc.v2\n"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.363283530Z" level=info msg="StopContainer for \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\" returns successfully"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.363932193Z" level=info msg="StopPodSandbox for \"354feac5fa3821ba077862484ad870ee2f740399351cf56bb907a0bec16dab14\""
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.364003756Z" level=info msg="Container to stop \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.394215887Z" level=info msg="shim disconnected" id=354feac5fa3821ba077862484ad870ee2f740399351cf56bb907a0bec16dab14
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.394281247Z" level=warning msg="cleaning up after shim disconnected" id=354feac5fa3821ba077862484ad870ee2f740399351cf56bb907a0bec16dab14 namespace=k8s.io
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.394294022Z" level=info msg="cleaning up dead shim"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.402315038Z" level=warning msg="cleanup warnings time=\"2024-03-27T22:05:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10224 runtime=io.containerd.runc.v2\n"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.449761430Z" level=info msg="TearDown network for sandbox \"354feac5fa3821ba077862484ad870ee2f740399351cf56bb907a0bec16dab14\" successfully"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.449817732Z" level=info msg="StopPodSandbox for \"354feac5fa3821ba077862484ad870ee2f740399351cf56bb907a0bec16dab14\" returns successfully"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.488454592Z" level=info msg="RemoveContainer for \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\""
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.495882672Z" level=info msg="RemoveContainer for \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\" returns successfully"
	Mar 27 22:05:55 addons-135346 containerd[766]: time="2024-03-27T22:05:55.496493462Z" level=error msg="ContainerStatus for \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\": not found"
	
	
	==> coredns [e143a8b723b57b64028b241726039226abad1f33f707b6834b882e3d0531421c] <==
	[INFO] 10.244.0.19:41447 - 45853 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057508s
	[INFO] 10.244.0.19:41447 - 34507 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048269s
	[INFO] 10.244.0.19:41447 - 55855 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060987s
	[INFO] 10.244.0.19:41447 - 9936 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042583s
	[INFO] 10.244.0.19:41447 - 41653 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001173318s
	[INFO] 10.244.0.19:41447 - 5127 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001062379s
	[INFO] 10.244.0.19:41447 - 2271 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049952s
	[INFO] 10.244.0.19:49098 - 23330 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000143299s
	[INFO] 10.244.0.19:49692 - 12739 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080655s
	[INFO] 10.244.0.19:49098 - 13352 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100042s
	[INFO] 10.244.0.19:49098 - 16895 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057483s
	[INFO] 10.244.0.19:49692 - 58431 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000275356s
	[INFO] 10.244.0.19:49098 - 44951 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000168135s
	[INFO] 10.244.0.19:49692 - 58972 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000080744s
	[INFO] 10.244.0.19:49692 - 2152 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071284s
	[INFO] 10.244.0.19:49098 - 47748 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000507613s
	[INFO] 10.244.0.19:49692 - 50589 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045455s
	[INFO] 10.244.0.19:49692 - 21538 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000125896s
	[INFO] 10.244.0.19:49098 - 47190 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000133076s
	[INFO] 10.244.0.19:49098 - 22487 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0012511s
	[INFO] 10.244.0.19:49692 - 60247 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001619501s
	[INFO] 10.244.0.19:49692 - 24452 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001226149s
	[INFO] 10.244.0.19:49098 - 36433 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001309494s
	[INFO] 10.244.0.19:49098 - 17205 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071186s
	[INFO] 10.244.0.19:49692 - 35792 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000420017s
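
Each lookup above fans out across the pod's DNS search path (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) with NXDOMAIN answers before the bare service name resolves with NOERROR; that is standard ndots-driven search expansion, not a fault. The search path can be confirmed from inside the cluster; a sketch (the nameserver IP and options line shown are typical values, not captured from this run):

	kubectl --context addons-135346 exec nginx -- cat /etc/resolv.conf
	# expected shape:
	#   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	#   nameserver 10.96.0.10
	#   options ndots:5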
	
	
	==> describe nodes <==
	Name:               addons-135346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-135346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81
	                    minikube.k8s.io/name=addons-135346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T22_03_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-135346
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 22:03:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-135346
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 22:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 22:05:47 +0000   Wed, 27 Mar 2024 22:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 22:05:47 +0000   Wed, 27 Mar 2024 22:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 22:05:47 +0000   Wed, 27 Mar 2024 22:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 22:05:47 +0000   Wed, 27 Mar 2024 22:03:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-135346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb564f11661d402b8ffc6c866bcb7c4b
	  System UUID:                34edcb58-6db0-417d-ab25-11cf6209ce4f
	  Boot ID:                    3ced2ab6-f576-451e-8762-49421fd13f89
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-5rrq9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  default                     hello-world-app-5d77478584-vwdkh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  gcp-auth                    gcp-auth-7d69788767-hzdrr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 coredns-76f75df574-rg6qv                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m34s
	  kube-system                 etcd-addons-135346                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m47s
	  kube-system                 kindnet-7zx7h                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m34s
	  kube-system                 kube-apiserver-addons-135346               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-controller-manager-addons-135346      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-proxy-xcxjd                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-scheduler-addons-135346               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 nvidia-device-plugin-daemonset-6r9b9       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  local-path-storage          local-path-provisioner-78b46b4d5c-9n4xx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-jjjdt             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m55s (x8 over 2m55s)  kubelet          Node addons-135346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s (x8 over 2m55s)  kubelet          Node addons-135346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x7 over 2m55s)  kubelet          Node addons-135346 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m47s                  kubelet          Node addons-135346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s                  kubelet          Node addons-135346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s                  kubelet          Node addons-135346 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m47s                  kubelet          Node addons-135346 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m37s                  kubelet          Node addons-135346 status is now: NodeReady
	  Normal  RegisteredNode           2m35s                  node-controller  Node addons-135346 event: Registered Node addons-135346 in Controller
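
The doubled NodeHasSufficient* events likely reflect the kubelet starting twice during bootstrap (the "Starting kubelet." event sits between the two sets) rather than a flapping node. The full view can be regenerated at any time:

	kubectl --context addons-135346 describe node addons-135346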
	
	
	==> dmesg <==
	[  +0.000698] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000942] FS-Cache: N-cookie d=00000000b6bf5524{9p.inode} n=00000000c604d612
	[  +0.001078] FS-Cache: N-key=[8] 'f172ed0000000000'
	[  +3.398009] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=00000000b6bf5524{9p.inode} n=000000002653e2eb
	[  +0.001155] FS-Cache: O-key=[8] 'f072ed0000000000'
	[  +0.000743] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000919] FS-Cache: N-cookie d=00000000b6bf5524{9p.inode} n=00000000d6e7d61d
	[  +0.001040] FS-Cache: N-key=[8] 'f072ed0000000000'
	[  +0.335523] FS-Cache: Duplicate cookie detected
	[  +0.000791] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000962] FS-Cache: O-cookie d=00000000b6bf5524{9p.inode} n=000000009c700054
	[  +0.001052] FS-Cache: O-key=[8] 'f672ed0000000000'
	[  +0.000725] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000992] FS-Cache: N-cookie d=00000000b6bf5524{9p.inode} n=00000000606cfcf0
	[  +0.001114] FS-Cache: N-key=[8] 'f672ed0000000000'
	[  +3.918650] FS-Cache: Duplicate cookie detected
	[  +0.000811] FS-Cache: O-cookie c=00000049 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=00000000940205da{9P.session} n=00000000df0f813e
	[  +0.001085] FS-Cache: O-key=[10] '34323939303836393631'
	[  +0.000799] FS-Cache: N-cookie c=0000004a [p=00000002 fl=2 nc=0 na=1]
	[  +0.001097] FS-Cache: N-cookie d=00000000940205da{9P.session} n=0000000002f7d2fb
	[  +0.001141] FS-Cache: N-key=[10] '34323939303836393631'
	[Mar27 21:19] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [10691a5abe96cef1971a6027f41a1509fb7c6f0167cac2dd7c70e11bb91a9408] <==
	{"level":"info","ts":"2024-03-27T22:03:07.333487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-27T22:03:07.338401Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-27T22:03:07.375104Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-27T22:03:07.375575Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-27T22:03:07.375697Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-27T22:03:07.376052Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-27T22:03:07.376149Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-27T22:03:08.189286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-27T22:03:08.18953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-27T22:03:08.189624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-27T22:03:08.18972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-27T22:03:08.189796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-27T22:03:08.189861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-27T22:03:08.189938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-27T22:03:08.194638Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-135346 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T22:03:08.194953Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T22:03:08.195143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T22:03:08.195505Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T22:03:08.197317Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-27T22:03:08.198452Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T22:03:08.19858Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T22:03:08.198697Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T22:03:08.198882Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T22:03:08.198982Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T22:03:08.205896Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [a0391b2ba1c1474aad1a3dff8013c5bb1e44875dec719b9ac7d170d5c69ad8b2] <==
	2024/03/27 22:04:47 GCP Auth Webhook started!
	2024/03/27 22:05:01 Ready to marshal response ...
	2024/03/27 22:05:01 Ready to write response ...
	2024/03/27 22:05:13 Ready to marshal response ...
	2024/03/27 22:05:13 Ready to write response ...
	2024/03/27 22:05:24 Ready to marshal response ...
	2024/03/27 22:05:24 Ready to write response ...
	2024/03/27 22:05:34 Ready to marshal response ...
	2024/03/27 22:05:34 Ready to write response ...
	2024/03/27 22:05:34 Ready to marshal response ...
	2024/03/27 22:05:34 Ready to write response ...
	
	
	==> kernel <==
	 22:06:01 up  5:48,  0 users,  load average: 1.22, 2.09, 2.89
	Linux addons-135346 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [6e2634dfb27a68356d7b83a273f1ff8ce52f07bd8fafefc51b42b96818046985] <==
	I0327 22:03:58.680232       1 main.go:227] handling current node
	I0327 22:04:08.693645       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:04:08.693670       1 main.go:227] handling current node
	I0327 22:04:18.705881       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:04:18.705907       1 main.go:227] handling current node
	I0327 22:04:28.718606       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:04:28.718631       1 main.go:227] handling current node
	I0327 22:04:38.731281       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:04:38.731321       1 main.go:227] handling current node
	I0327 22:04:48.756983       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:04:48.757011       1 main.go:227] handling current node
	I0327 22:04:58.760994       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:04:58.761024       1 main.go:227] handling current node
	I0327 22:05:08.770281       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:05:08.770309       1 main.go:227] handling current node
	I0327 22:05:18.785012       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:05:18.785255       1 main.go:227] handling current node
	I0327 22:05:28.789803       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:05:28.789837       1 main.go:227] handling current node
	I0327 22:05:38.802536       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:05:38.802563       1 main.go:227] handling current node
	I0327 22:05:48.807704       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:05:48.807731       1 main.go:227] handling current node
	I0327 22:05:58.811548       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 22:05:58.811578       1 main.go:227] handling current node
	
	
	==> kube-apiserver [d00c79f91a13be9ea1d062db9f016cc41b346136724c5f12bd221dff83865072] <==
	E0327 22:04:28.035947       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0327 22:04:28.077763       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0327 22:04:28.140368       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0327 22:05:18.759229       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0327 22:05:19.788788       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0327 22:05:21.469349       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0327 22:05:24.380942       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0327 22:05:24.704723       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.173.252"}
	I0327 22:05:29.046237       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0327 22:05:34.595237       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.176.101"}
	E0327 22:05:52.328523       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0327 22:05:52.668008       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 22:05:52.668061       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 22:05:52.695960       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 22:05:52.696007       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 22:05:52.721975       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 22:05:52.722023       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 22:05:52.734715       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 22:05:52.734957       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 22:05:52.756716       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 22:05:52.756950       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0327 22:05:53.722296       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0327 22:05:53.757835       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0327 22:05:53.770678       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	W0327 22:05:53.779392       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
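
The two "Unable to authenticate the request" errors coincide with the ingress addon being disabled at the end of this run: the ingress-nginx ServiceAccount is deleted while a client is still presenting a token minted from it, so these are expected teardown noise. Whether any webhook configuration was left behind can be checked afterwards:

	kubectl --context addons-135346 get validatingwebhookconfigurations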
	
	
	==> kube-controller-manager [17c9dfc6c567466586c53a0fd459255f1af536bbd782cd01ef726a91f8425bef] <==
	I0327 22:05:52.256738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="8.55µs"
	I0327 22:05:52.267127       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0327 22:05:52.804568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="16.238µs"
	E0327 22:05:53.724089       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:53.759691       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:53.781229       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 22:05:54.491786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.772µs"
	W0327 22:05:54.659775       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:54.659813       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 22:05:55.180127       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:55.180165       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 22:05:55.195101       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:55.195137       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 22:05:56.689252       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:56.689290       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 22:05:56.700869       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0327 22:05:56.700909       1 shared_informer.go:318] Caches are synced for resource quota
	I0327 22:05:57.113545       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0327 22:05:57.113595       1 shared_informer.go:318] Caches are synced for garbage collector
	W0327 22:05:57.360462       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:57.360564       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 22:05:57.535781       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:57.535887       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 22:05:59.785539       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 22:05:59.785571       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [1a1b6dfa068e8d7b7e1038504243949ac7ffc0c69e2b6451256758d7a0cc0297] <==
	I0327 22:03:28.702274       1 server_others.go:72] "Using iptables proxy"
	I0327 22:03:28.716022       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0327 22:03:28.830060       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0327 22:03:28.830092       1 server_others.go:168] "Using iptables Proxier"
	I0327 22:03:28.832265       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0327 22:03:28.832288       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0327 22:03:28.832335       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 22:03:28.832577       1 server.go:865] "Version info" version="v1.29.3"
	I0327 22:03:28.832592       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 22:03:28.833478       1 config.go:188] "Starting service config controller"
	I0327 22:03:28.833509       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 22:03:28.833528       1 config.go:97] "Starting endpoint slice config controller"
	I0327 22:03:28.833532       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 22:03:28.835274       1 config.go:315] "Starting node config controller"
	I0327 22:03:28.835293       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 22:03:28.934021       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 22:03:28.934107       1 shared_informer.go:318] Caches are synced for service config
	I0327 22:03:28.935794       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7b27890fa979db3d7ac07b1efe9683f2dec675353995f5c064accc5689a535dd] <==
	W0327 22:03:11.205326       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 22:03:11.205478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 22:03:12.036852       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 22:03:12.036891       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 22:03:12.053817       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 22:03:12.053869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 22:03:12.056494       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 22:03:12.056542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 22:03:12.146758       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 22:03:12.146877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 22:03:12.157574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 22:03:12.157948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 22:03:12.277885       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 22:03:12.278280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 22:03:12.339748       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 22:03:12.339790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0327 22:03:12.342149       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 22:03:12.342505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 22:03:12.373954       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 22:03:12.374240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 22:03:12.412836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 22:03:12.412928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 22:03:12.452581       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 22:03:12.452810       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0327 22:03:14.775614       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 22:05:53 addons-135346 kubelet[1482]: E0327 22:05:53.476735    1482 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca1af9597ff6afcb6de3c6c152deb36286cbb6fa9158f2e4ca70d2216a6b9322\": not found" containerID="ca1af9597ff6afcb6de3c6c152deb36286cbb6fa9158f2e4ca70d2216a6b9322"
	Mar 27 22:05:53 addons-135346 kubelet[1482]: I0327 22:05:53.476877    1482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca1af9597ff6afcb6de3c6c152deb36286cbb6fa9158f2e4ca70d2216a6b9322"} err="failed to get container status \"ca1af9597ff6afcb6de3c6c152deb36286cbb6fa9158f2e4ca70d2216a6b9322\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca1af9597ff6afcb6de3c6c152deb36286cbb6fa9158f2e4ca70d2216a6b9322\": not found"
	Mar 27 22:05:53 addons-135346 kubelet[1482]: I0327 22:05:53.476906    1482 scope.go:117] "RemoveContainer" containerID="1f8be71145138d2c49dcf76c6a7c6e8f3af20f73144814cca92f71c9c408dc0d"
	Mar 27 22:05:53 addons-135346 kubelet[1482]: I0327 22:05:53.487503    1482 scope.go:117] "RemoveContainer" containerID="1f8be71145138d2c49dcf76c6a7c6e8f3af20f73144814cca92f71c9c408dc0d"
	Mar 27 22:05:53 addons-135346 kubelet[1482]: E0327 22:05:53.489478    1482 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f8be71145138d2c49dcf76c6a7c6e8f3af20f73144814cca92f71c9c408dc0d\": not found" containerID="1f8be71145138d2c49dcf76c6a7c6e8f3af20f73144814cca92f71c9c408dc0d"
	Mar 27 22:05:53 addons-135346 kubelet[1482]: I0327 22:05:53.489523    1482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f8be71145138d2c49dcf76c6a7c6e8f3af20f73144814cca92f71c9c408dc0d"} err="failed to get container status \"1f8be71145138d2c49dcf76c6a7c6e8f3af20f73144814cca92f71c9c408dc0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f8be71145138d2c49dcf76c6a7c6e8f3af20f73144814cca92f71c9c408dc0d\": not found"
	Mar 27 22:05:54 addons-135346 kubelet[1482]: I0327 22:05:54.251040    1482 scope.go:117] "RemoveContainer" containerID="dac6f1854e8161e01f54f3011bc9c68647ee38d0270052a519def7589d940e52"
	Mar 27 22:05:54 addons-135346 kubelet[1482]: I0327 22:05:54.258726    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2200990d-0851-4e03-95c9-10b88d1b37ac" path="/var/lib/kubelet/pods/2200990d-0851-4e03-95c9-10b88d1b37ac/volumes"
	Mar 27 22:05:54 addons-135346 kubelet[1482]: I0327 22:05:54.259201    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281dcc87-2d0f-413e-a2e8-35d48ace4a1e" path="/var/lib/kubelet/pods/281dcc87-2d0f-413e-a2e8-35d48ace4a1e/volumes"
	Mar 27 22:05:54 addons-135346 kubelet[1482]: I0327 22:05:54.259719    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71fa7bb9-593c-414e-a210-f3230570440e" path="/var/lib/kubelet/pods/71fa7bb9-593c-414e-a210-f3230570440e/volumes"
	Mar 27 22:05:54 addons-135346 kubelet[1482]: I0327 22:05:54.260285    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0eeb53a-1fbf-45a9-b3e6-38cb5733e653" path="/var/lib/kubelet/pods/f0eeb53a-1fbf-45a9-b3e6-38cb5733e653/volumes"
	Mar 27 22:05:54 addons-135346 kubelet[1482]: I0327 22:05:54.476733    1482 scope.go:117] "RemoveContainer" containerID="dac6f1854e8161e01f54f3011bc9c68647ee38d0270052a519def7589d940e52"
	Mar 27 22:05:54 addons-135346 kubelet[1482]: I0327 22:05:54.477633    1482 scope.go:117] "RemoveContainer" containerID="06a10a794fe3e4cdeeddb32ed80c4884670b4e6dfc9e65d90c3492e06e33c3cd"
	Mar 27 22:05:54 addons-135346 kubelet[1482]: E0327 22:05:54.478041    1482 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-vwdkh_default(edfa84c7-889b-4730-9176-ac120e0deec6)\"" pod="default/hello-world-app-5d77478584-vwdkh" podUID="edfa84c7-889b-4730-9176-ac120e0deec6"
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.484403    1482 scope.go:117] "RemoveContainer" containerID="482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d"
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.496198    1482 scope.go:117] "RemoveContainer" containerID="482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d"
	Mar 27 22:05:55 addons-135346 kubelet[1482]: E0327 22:05:55.496704    1482 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\": not found" containerID="482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d"
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.496769    1482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d"} err="failed to get container status \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\": rpc error: code = NotFound desc = an error occurred when try to find container \"482609d77bd8dfd7592a951ba320125265b48fdffeda730091be2fc97b30493d\": not found"
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.628759    1482 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jrr6\" (UniqueName: \"kubernetes.io/projected/3edd8557-2a2d-4a33-b8fe-902d8a998c12-kube-api-access-2jrr6\") pod \"3edd8557-2a2d-4a33-b8fe-902d8a998c12\" (UID: \"3edd8557-2a2d-4a33-b8fe-902d8a998c12\") "
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.628822    1482 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3edd8557-2a2d-4a33-b8fe-902d8a998c12-webhook-cert\") pod \"3edd8557-2a2d-4a33-b8fe-902d8a998c12\" (UID: \"3edd8557-2a2d-4a33-b8fe-902d8a998c12\") "
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.631248    1482 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3edd8557-2a2d-4a33-b8fe-902d8a998c12-kube-api-access-2jrr6" (OuterVolumeSpecName: "kube-api-access-2jrr6") pod "3edd8557-2a2d-4a33-b8fe-902d8a998c12" (UID: "3edd8557-2a2d-4a33-b8fe-902d8a998c12"). InnerVolumeSpecName "kube-api-access-2jrr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.635025    1482 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3edd8557-2a2d-4a33-b8fe-902d8a998c12-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3edd8557-2a2d-4a33-b8fe-902d8a998c12" (UID: "3edd8557-2a2d-4a33-b8fe-902d8a998c12"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.729493    1482 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2jrr6\" (UniqueName: \"kubernetes.io/projected/3edd8557-2a2d-4a33-b8fe-902d8a998c12-kube-api-access-2jrr6\") on node \"addons-135346\" DevicePath \"\""
	Mar 27 22:05:55 addons-135346 kubelet[1482]: I0327 22:05:55.729539    1482 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3edd8557-2a2d-4a33-b8fe-902d8a998c12-webhook-cert\") on node \"addons-135346\" DevicePath \"\""
	Mar 27 22:05:56 addons-135346 kubelet[1482]: I0327 22:05:56.254699    1482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3edd8557-2a2d-4a33-b8fe-902d8a998c12" path="/var/lib/kubelet/pods/3edd8557-2a2d-4a33-b8fe-902d8a998c12/volumes"
	
	
	==> storage-provisioner [44de166219b561b616c6ebe0f0ab599bc6797ee87fa58332c7b87ec579c23663] <==
	I0327 22:03:34.339412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 22:03:34.359779       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 22:03:34.359825       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 22:03:34.376279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 22:03:34.376436       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-135346_f52d0de3-778b-40e9-821d-264df1ffc5c5!
	I0327 22:03:34.377315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e6687360-1ac4-424a-bb58-fff1ee47de45", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-135346_f52d0de3-778b-40e9-821d-264df1ffc5c5 became leader
	I0327 22:03:34.476600       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-135346_f52d0de3-778b-40e9-821d-264df1ffc5c5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-135346 -n addons-135346
helpers_test.go:261: (dbg) Run:  kubectl --context addons-135346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.12s)
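Note: the kube-scheduler reflector denials at the top of the post-mortem ("system:kube-scheduler" cannot list csidrivers, namespaces, nodes, ...) are the usual transient window between scheduler start and RBAC bootstrap, and the "Caches are synced" line at 22:03:14 suggests they cleared on their own. A minimal sketch for double-checking the bootstrapped RBAC by hand, assuming the addons-135346 context is still in the kubeconfig:

	kubectl --context addons-135346 get clusterrole,clusterrolebinding system:kube-scheduler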

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image load --daemon gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 image load --daemon gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr: (4.445409158s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-057506" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.77s)
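Note: this failure and the two daemon-load variants below (ImageReloadDaemon, ImageTagAndLoadDaemon) share one symptom: "image load --daemon" exits cleanly but the tag never appears in "image ls". A minimal manual repro with the same commands the test runs, assuming the host docker daemon still holds the functional-057506 tag:

	out/minikube-linux-arm64 -p functional-057506 image load --daemon gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr
	out/minikube-linux-arm64 -p functional-057506 image ls | grep addon-resizer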

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image load --daemon gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 image load --daemon gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr: (3.376056605s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-057506" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.341877519s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-057506
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image load --daemon gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 image load --daemon gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr: (3.162603085s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-057506" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image save gcr.io/google-containers/addon-resizer:functional-057506 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)
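Note: the save command itself reported no error, yet the tar was never written. A minimal by-hand check with the same arguments, assuming the workspace path is writable:

	out/minikube-linux-arm64 -p functional-057506 image save gcr.io/google-containers/addon-resizer:functional-057506 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
	ls -l /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar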

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0327 22:12:25.451847 1450872 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:12:25.452450 1450872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:12:25.452463 1450872 out.go:304] Setting ErrFile to fd 2...
	I0327 22:12:25.452468 1450872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:12:25.452709 1450872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:12:25.453341 1450872 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:12:25.453464 1450872 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:12:25.453956 1450872 cli_runner.go:164] Run: docker container inspect functional-057506 --format={{.State.Status}}
	I0327 22:12:25.470201 1450872 ssh_runner.go:195] Run: systemctl --version
	I0327 22:12:25.470297 1450872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-057506
	I0327 22:12:25.485948 1450872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/functional-057506/id_rsa Username:docker}
	I0327 22:12:25.575082 1450872 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0327 22:12:25.575172 1450872 cache_images.go:254] Failed to load cached images for profile functional-057506. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0327 22:12:25.575196 1450872 cache_images.go:262] succeeded pushing to: 
	I0327 22:12:25.575201 1450872 cache_images.go:263] failed pushing to: functional-057506

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
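Note: per the stderr above (stat ... no such file or directory), this is a direct cascade of the ImageSaveToFile failure: the load stats a tar that was never created. When reproducing, a sketch that gates the load on the artifact existing keeps the two failures separate:

	test -f /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar && \
	  out/minikube-linux-arm64 -p functional-057506 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr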

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (374.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-195171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0327 22:49:48.784729 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-195171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m9.677626284s)

                                                
                                                
-- stdout --
	* [old-k8s-version-195171] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-195171" primary control-plane node in "old-k8s-version-195171" cluster
	* Pulling base image v0.0.43-beta.0 ...
	* Restarting existing docker container for "old-k8s-version-195171" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-195171 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
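Note: everything in the stdout above succeeded (container restarted, Kubernetes v1.20.0 prepared, addons enabled), so the exit status 80 after 6m9s most likely comes from the post-start component verification that --wait=true requests. Two follow-up commands, assuming the old-k8s-version-195171 profile still exists:

	out/minikube-linux-arm64 -p old-k8s-version-195171 status
	out/minikube-linux-arm64 -p old-k8s-version-195171 logs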
** stderr ** 
	I0327 22:49:47.437083 1612768 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:49:47.437223 1612768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:49:47.437234 1612768 out.go:304] Setting ErrFile to fd 2...
	I0327 22:49:47.437239 1612768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:49:47.437639 1612768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:49:47.439420 1612768 out.go:298] Setting JSON to false
	I0327 22:49:47.440418 1612768 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23525,"bootTime":1711556262,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:49:47.440515 1612768 start.go:139] virtualization:  
	I0327 22:49:47.444032 1612768 out.go:177] * [old-k8s-version-195171] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 22:49:47.447523 1612768 notify.go:220] Checking for updates...
	I0327 22:49:47.458236 1612768 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 22:49:47.461104 1612768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:49:47.463066 1612768 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:49:47.465168 1612768 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:49:47.467314 1612768 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 22:49:47.469637 1612768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 22:49:47.472184 1612768 config.go:182] Loaded profile config "old-k8s-version-195171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0327 22:49:47.474946 1612768 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 22:49:47.477237 1612768 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:49:47.511384 1612768 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:49:47.511520 1612768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:49:47.663597 1612768 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:67 SystemTime:2024-03-27 22:49:47.646867318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:49:47.663710 1612768 docker.go:295] overlay module found
	I0327 22:49:47.666077 1612768 out.go:177] * Using the docker driver based on existing profile
	I0327 22:49:47.667814 1612768 start.go:297] selected driver: docker
	I0327 22:49:47.667830 1612768 start.go:901] validating driver "docker" against &{Name:old-k8s-version-195171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:49:47.668020 1612768 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 22:49:47.668866 1612768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:49:47.803081 1612768 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:67 SystemTime:2024-03-27 22:49:47.793634371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:49:47.803427 1612768 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 22:49:47.803487 1612768 cni.go:84] Creating CNI manager for ""
	I0327 22:49:47.803503 1612768 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:49:47.803543 1612768 start.go:340] cluster config:
	{Name:old-k8s-version-195171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:49:47.805774 1612768 out.go:177] * Starting "old-k8s-version-195171" primary control-plane node in "old-k8s-version-195171" cluster
	I0327 22:49:47.807751 1612768 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 22:49:47.810942 1612768 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 22:49:47.814047 1612768 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 22:49:47.814170 1612768 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 22:49:47.814546 1612768 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0327 22:49:47.814561 1612768 cache.go:56] Caching tarball of preloaded images
	I0327 22:49:47.814749 1612768 preload.go:173] Found /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 22:49:47.814763 1612768 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0327 22:49:47.814938 1612768 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/config.json ...
	I0327 22:49:47.836083 1612768 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon, skipping pull
	I0327 22:49:47.836105 1612768 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in daemon, skipping load
	I0327 22:49:47.836183 1612768 cache.go:194] Successfully downloaded all kic artifacts
	I0327 22:49:47.836256 1612768 start.go:360] acquireMachinesLock for old-k8s-version-195171: {Name:mk90e4af74ece868eb48f48517a9c6c6a18e2ab6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 22:49:47.836366 1612768 start.go:364] duration metric: took 53.66µs to acquireMachinesLock for "old-k8s-version-195171"
	I0327 22:49:47.836426 1612768 start.go:96] Skipping create...Using existing machine configuration
	I0327 22:49:47.836436 1612768 fix.go:54] fixHost starting: 
	I0327 22:49:47.836862 1612768 cli_runner.go:164] Run: docker container inspect old-k8s-version-195171 --format={{.State.Status}}
	I0327 22:49:47.861052 1612768 fix.go:112] recreateIfNeeded on old-k8s-version-195171: state=Stopped err=<nil>
	W0327 22:49:47.861139 1612768 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 22:49:47.863751 1612768 out.go:177] * Restarting existing docker container for "old-k8s-version-195171" ...
	I0327 22:49:47.866446 1612768 cli_runner.go:164] Run: docker start old-k8s-version-195171
	I0327 22:49:48.263268 1612768 cli_runner.go:164] Run: docker container inspect old-k8s-version-195171 --format={{.State.Status}}
	I0327 22:49:48.288925 1612768 kic.go:430] container "old-k8s-version-195171" state is running.
	I0327 22:49:48.289305 1612768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-195171
	I0327 22:49:48.312904 1612768 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/config.json ...
	I0327 22:49:48.313124 1612768 machine.go:94] provisionDockerMachine start ...
	I0327 22:49:48.313185 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:48.337420 1612768 main.go:141] libmachine: Using SSH client type: native
	I0327 22:49:48.337688 1612768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0327 22:49:48.337698 1612768 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 22:49:48.339313 1612768 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0327 22:49:51.465802 1612768 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-195171
	
	I0327 22:49:51.465823 1612768 ubuntu.go:169] provisioning hostname "old-k8s-version-195171"
	I0327 22:49:51.465895 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:51.488192 1612768 main.go:141] libmachine: Using SSH client type: native
	I0327 22:49:51.488440 1612768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0327 22:49:51.488452 1612768 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-195171 && echo "old-k8s-version-195171" | sudo tee /etc/hostname
	I0327 22:49:51.644606 1612768 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-195171
	
	I0327 22:49:51.644687 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:51.664242 1612768 main.go:141] libmachine: Using SSH client type: native
	I0327 22:49:51.664488 1612768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0327 22:49:51.664512 1612768 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-195171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-195171/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-195171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 22:49:51.790253 1612768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 22:49:51.790282 1612768 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17735-1410709/.minikube CaCertPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17735-1410709/.minikube}
	I0327 22:49:51.790311 1612768 ubuntu.go:177] setting up certificates
	I0327 22:49:51.790320 1612768 provision.go:84] configureAuth start
	I0327 22:49:51.790381 1612768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-195171
	I0327 22:49:51.807095 1612768 provision.go:143] copyHostCerts
	I0327 22:49:51.807165 1612768 exec_runner.go:144] found /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.pem, removing ...
	I0327 22:49:51.807189 1612768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.pem
	I0327 22:49:51.807263 1612768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.pem (1078 bytes)
	I0327 22:49:51.807368 1612768 exec_runner.go:144] found /home/jenkins/minikube-integration/17735-1410709/.minikube/cert.pem, removing ...
	I0327 22:49:51.807378 1612768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17735-1410709/.minikube/cert.pem
	I0327 22:49:51.807407 1612768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17735-1410709/.minikube/cert.pem (1123 bytes)
	I0327 22:49:51.807464 1612768 exec_runner.go:144] found /home/jenkins/minikube-integration/17735-1410709/.minikube/key.pem, removing ...
	I0327 22:49:51.807473 1612768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17735-1410709/.minikube/key.pem
	I0327 22:49:51.807497 1612768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17735-1410709/.minikube/key.pem (1675 bytes)
	I0327 22:49:51.807584 1612768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-195171 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-195171]
	I0327 22:49:52.812764 1612768 provision.go:177] copyRemoteCerts
	I0327 22:49:52.812881 1612768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 22:49:52.813009 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:52.829330 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:52.924103 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 22:49:52.956017 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0327 22:49:53.001045 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 22:49:53.033360 1612768 provision.go:87] duration metric: took 1.243022075s to configureAuth
	I0327 22:49:53.033385 1612768 ubuntu.go:193] setting minikube options for container-runtime
	I0327 22:49:53.033589 1612768 config.go:182] Loaded profile config "old-k8s-version-195171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0327 22:49:53.033597 1612768 machine.go:97] duration metric: took 4.720466072s to provisionDockerMachine
	I0327 22:49:53.033605 1612768 start.go:293] postStartSetup for "old-k8s-version-195171" (driver="docker")
	I0327 22:49:53.033615 1612768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 22:49:53.033670 1612768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 22:49:53.033716 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:53.055058 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:53.152586 1612768 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 22:49:53.156146 1612768 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 22:49:53.156221 1612768 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 22:49:53.156257 1612768 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 22:49:53.156280 1612768 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0327 22:49:53.156306 1612768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-1410709/.minikube/addons for local assets ...
	I0327 22:49:53.156384 1612768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-1410709/.minikube/files for local assets ...
	I0327 22:49:53.156487 1612768 filesync.go:149] local asset: /home/jenkins/minikube-integration/17735-1410709/.minikube/files/etc/ssl/certs/14161272.pem -> 14161272.pem in /etc/ssl/certs
	I0327 22:49:53.156633 1612768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 22:49:53.165801 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/files/etc/ssl/certs/14161272.pem --> /etc/ssl/certs/14161272.pem (1708 bytes)
	I0327 22:49:53.196704 1612768 start.go:296] duration metric: took 163.083654ms for postStartSetup
	I0327 22:49:53.196829 1612768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 22:49:53.196899 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:53.215863 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:53.303514 1612768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0327 22:49:53.311699 1612768 fix.go:56] duration metric: took 5.475255621s for fixHost
	I0327 22:49:53.311721 1612768 start.go:83] releasing machines lock for "old-k8s-version-195171", held for 5.475310898s
	I0327 22:49:53.311808 1612768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-195171
	I0327 22:49:53.328060 1612768 ssh_runner.go:195] Run: cat /version.json
	I0327 22:49:53.328149 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:53.328392 1612768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 22:49:53.328448 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:53.359731 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:53.363536 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:53.564427 1612768 ssh_runner.go:195] Run: systemctl --version
	I0327 22:49:53.568976 1612768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 22:49:53.573492 1612768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0327 22:49:53.592029 1612768 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0327 22:49:53.592153 1612768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 22:49:53.601659 1612768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0327 22:49:53.601728 1612768 start.go:494] detecting cgroup driver to use...
	I0327 22:49:53.601782 1612768 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 22:49:53.601852 1612768 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 22:49:53.617005 1612768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 22:49:53.630364 1612768 docker.go:217] disabling cri-docker service (if available) ...
	I0327 22:49:53.630501 1612768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 22:49:53.644726 1612768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 22:49:53.657312 1612768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 22:49:53.777061 1612768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 22:49:53.885041 1612768 docker.go:233] disabling docker service ...
	I0327 22:49:53.885153 1612768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 22:49:53.897802 1612768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 22:49:53.910921 1612768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 22:49:54.026536 1612768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 22:49:54.146070 1612768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 22:49:54.160998 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 22:49:54.180227 1612768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0327 22:49:54.191335 1612768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 22:49:54.202500 1612768 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 22:49:54.202626 1612768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 22:49:54.213429 1612768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 22:49:54.223959 1612768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 22:49:54.234811 1612768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 22:49:54.245407 1612768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 22:49:54.255549 1612768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 22:49:54.266075 1612768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 22:49:54.275716 1612768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 22:49:54.285053 1612768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 22:49:54.392305 1612768 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 22:49:54.616684 1612768 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0327 22:49:54.616753 1612768 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0327 22:49:54.622987 1612768 start.go:562] Will wait 60s for crictl version
	I0327 22:49:54.623086 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:49:54.629271 1612768 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 22:49:54.709185 1612768 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0327 22:49:54.709314 1612768 ssh_runner.go:195] Run: containerd --version
	I0327 22:49:54.732446 1612768 ssh_runner.go:195] Run: containerd --version
	I0327 22:49:54.759786 1612768 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0327 22:49:54.762028 1612768 cli_runner.go:164] Run: docker network inspect old-k8s-version-195171 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 22:49:54.784131 1612768 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0327 22:49:54.788175 1612768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 22:49:54.799502 1612768 kubeadm.go:877] updating cluster {Name:old-k8s-version-195171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 22:49:54.799617 1612768 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 22:49:54.799672 1612768 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 22:49:54.855294 1612768 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 22:49:54.855315 1612768 containerd.go:534] Images already preloaded, skipping extraction
	I0327 22:49:54.855390 1612768 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 22:49:54.905504 1612768 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 22:49:54.905573 1612768 cache_images.go:84] Images are preloaded, skipping loading
	I0327 22:49:54.905608 1612768 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0327 22:49:54.905782 1612768 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-195171 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 22:49:54.905883 1612768 ssh_runner.go:195] Run: sudo crictl info
	I0327 22:49:54.953613 1612768 cni.go:84] Creating CNI manager for ""
	I0327 22:49:54.953635 1612768 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:49:54.953648 1612768 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 22:49:54.953669 1612768 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-195171 NodeName:old-k8s-version-195171 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0327 22:49:54.953822 1612768 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-195171"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 22:49:54.953888 1612768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0327 22:49:54.963787 1612768 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 22:49:54.963929 1612768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 22:49:54.973420 1612768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0327 22:49:54.994739 1612768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 22:49:55.017816 1612768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0327 22:49:55.041732 1612768 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0327 22:49:55.046176 1612768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
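The two Run lines above are minikube's idiom for pinning a name in the guest's /etc/hosts: grep first checks whether the entry already exists, and if not, a bash group command filters out any stale line and appends the new "IP<tab>host" pair via a temp file. A minimal Go sketch of the same idea (ensureHostsEntry is a hypothetical name, and the example writes to a scratch file rather than the real /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the bash one-liner above: drop any existing line
// ending in "<tab><host>" (the grep -v step), then append "ip<tab>host".
// This is an illustrative helper, not minikube's actual code.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Exercise against a scratch copy rather than the real /etc/hosts.
	const path = "/tmp/hosts.example"
	_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHostsEntry(path, "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}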
	I0327 22:49:55.058826 1612768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 22:49:55.170453 1612768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 22:49:55.189100 1612768 certs.go:68] Setting up /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171 for IP: 192.168.76.2
	I0327 22:49:55.189175 1612768 certs.go:194] generating shared ca certs ...
	I0327 22:49:55.189216 1612768 certs.go:226] acquiring lock for ca certs: {Name:mk24b20553d2a6654488b5498452cac9c2150bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:49:55.189416 1612768 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.key
	I0327 22:49:55.189501 1612768 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.key
	I0327 22:49:55.189528 1612768 certs.go:256] generating profile certs ...
	I0327 22:49:55.189680 1612768 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.key
	I0327 22:49:55.189857 1612768 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/apiserver.key.303c6f27
	I0327 22:49:55.189940 1612768 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/proxy-client.key
	I0327 22:49:55.190105 1612768 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/1416127.pem (1338 bytes)
	W0327 22:49:55.190179 1612768 certs.go:480] ignoring /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/1416127_empty.pem, impossibly tiny 0 bytes
	I0327 22:49:55.190222 1612768 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 22:49:55.190279 1612768 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem (1078 bytes)
	I0327 22:49:55.190360 1612768 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/cert.pem (1123 bytes)
	I0327 22:49:55.190446 1612768 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/key.pem (1675 bytes)
	I0327 22:49:55.190529 1612768 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-1410709/.minikube/files/etc/ssl/certs/14161272.pem (1708 bytes)
	I0327 22:49:55.191407 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 22:49:55.240868 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0327 22:49:55.307454 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 22:49:55.385241 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 22:49:55.418072 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 22:49:55.444344 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 22:49:55.470759 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 22:49:55.498038 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 22:49:55.524776 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 22:49:55.552500 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/1416127.pem --> /usr/share/ca-certificates/1416127.pem (1338 bytes)
	I0327 22:49:55.580099 1612768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-1410709/.minikube/files/etc/ssl/certs/14161272.pem --> /usr/share/ca-certificates/14161272.pem (1708 bytes)
	I0327 22:49:55.606569 1612768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 22:49:55.626093 1612768 ssh_runner.go:195] Run: openssl version
	I0327 22:49:55.632145 1612768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1416127.pem && ln -fs /usr/share/ca-certificates/1416127.pem /etc/ssl/certs/1416127.pem"
	I0327 22:49:55.642760 1612768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1416127.pem
	I0327 22:49:55.646896 1612768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 22:08 /usr/share/ca-certificates/1416127.pem
	I0327 22:49:55.647028 1612768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1416127.pem
	I0327 22:49:55.654606 1612768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1416127.pem /etc/ssl/certs/51391683.0"
	I0327 22:49:55.664316 1612768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14161272.pem && ln -fs /usr/share/ca-certificates/14161272.pem /etc/ssl/certs/14161272.pem"
	I0327 22:49:55.674132 1612768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14161272.pem
	I0327 22:49:55.678233 1612768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 22:08 /usr/share/ca-certificates/14161272.pem
	I0327 22:49:55.678378 1612768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14161272.pem
	I0327 22:49:55.686828 1612768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14161272.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 22:49:55.696811 1612768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 22:49:55.706985 1612768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 22:49:55.710985 1612768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 22:02 /usr/share/ca-certificates/minikubeCA.pem
	I0327 22:49:55.711130 1612768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 22:49:55.718679 1612768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 22:49:55.728620 1612768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 22:49:55.732764 1612768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 22:49:55.740170 1612768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 22:49:55.748042 1612768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 22:49:55.755564 1612768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 22:49:55.763015 1612768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 22:49:55.770285 1612768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
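Each of the openssl invocations above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that is how minikube decides whether a cert can be reused or must be regenerated. The equivalent check in Go's standard library looks roughly like this (expiresSoon is a hypothetical name):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the PEM certificate at path expires within the
// given window — the same question `openssl x509 -checkend 86400` answers.
func expiresSoon(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}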
	I0327 22:49:55.777723 1612768 kubeadm.go:391] StartCluster: {Name:old-k8s-version-195171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:49:55.777903 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0327 22:49:55.778001 1612768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 22:49:55.835612 1612768 cri.go:89] found id: "84593e4d98d96f90ffb2680fe57238a5c4cf80bf817dae59479a0c7a089ad239"
	I0327 22:49:55.835672 1612768 cri.go:89] found id: "8a9a5fdbf09ab73e2243456de6a49a853bfe4da4316b29ceb12fc0e6e518b207"
	I0327 22:49:55.835700 1612768 cri.go:89] found id: "d0b987e0a8e4f9e88a3d46d048c3c86ea1dcbd737d51ec21e43dd7730e08a76a"
	I0327 22:49:55.835718 1612768 cri.go:89] found id: "3242e57c5b9958b451fec095ec4db56864f938b6b05a2d40b3e313e85436386f"
	I0327 22:49:55.835752 1612768 cri.go:89] found id: "99491c7efa6ca74287288d2ca9dd78b3a2a5c1d801b69057db17e9237ab0e681"
	I0327 22:49:55.835777 1612768 cri.go:89] found id: "b6f637dff335b0a27399ab46defc429894613ed31ac5f4467f5d5fbb1a01af46"
	I0327 22:49:55.835795 1612768 cri.go:89] found id: "6d7bb312fba1e741ab92c5eb9a9f90d53b215e3c7a64f3a6790bb38e841c76ee"
	I0327 22:49:55.835812 1612768 cri.go:89] found id: "ad81f139258693b946d350f892a0a317ad916974d01cf75227e4cbfb1120ad94"
	I0327 22:49:55.835841 1612768 cri.go:89] found id: ""
	I0327 22:49:55.835923 1612768 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0327 22:49:55.849059 1612768 cri.go:116] JSON = null
	W0327 22:49:55.849160 1612768 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
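The mismatch just logged — crictl reports 8 container IDs while `runc ... list -f json` returns null — is how minikube concludes that no containers are paused, so the unpause is skipped as a no-op. For illustration, the crictl half of that check could be shelled out the same way the ssh_runner line does (listKubeSystemContainers is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same command as the ssh_runner line above;
// --quiet prints one container ID per line, so splitting on whitespace
// recovers the list echoed by the cri.go "found id" lines.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}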
	I0327 22:49:55.849249 1612768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 22:49:55.859068 1612768 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 22:49:55.859142 1612768 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 22:49:55.859176 1612768 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 22:49:55.859258 1612768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 22:49:55.868296 1612768 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 22:49:55.868835 1612768 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-195171" does not appear in /home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:49:55.868999 1612768 kubeconfig.go:62] /home/jenkins/minikube-integration/17735-1410709/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-195171" cluster setting kubeconfig missing "old-k8s-version-195171" context setting]
	I0327 22:49:55.869353 1612768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/kubeconfig: {Name:mkbeaefc44aca3b944acccf918e2fc82ac53211f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:49:55.871013 1612768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 22:49:55.881577 1612768 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0327 22:49:55.881657 1612768 kubeadm.go:591] duration metric: took 22.440696ms to restartPrimaryControlPlane
	I0327 22:49:55.881684 1612768 kubeadm.go:393] duration metric: took 103.976583ms to StartCluster
	I0327 22:49:55.881732 1612768 settings.go:142] acquiring lock: {Name:mk0422242c5bd9a591643a1eff705818469bc24b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:49:55.881834 1612768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:49:55.882634 1612768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/kubeconfig: {Name:mkbeaefc44aca3b944acccf918e2fc82ac53211f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:49:55.882901 1612768 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 22:49:55.888158 1612768 out.go:177] * Verifying Kubernetes components...
	I0327 22:49:55.883305 1612768 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 22:49:55.883376 1612768 config.go:182] Loaded profile config "old-k8s-version-195171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0327 22:49:55.890486 1612768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 22:49:55.888478 1612768 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-195171"
	I0327 22:49:55.890709 1612768 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-195171"
	W0327 22:49:55.890718 1612768 addons.go:243] addon storage-provisioner should already be in state true
	I0327 22:49:55.890754 1612768 host.go:66] Checking if "old-k8s-version-195171" exists ...
	I0327 22:49:55.891185 1612768 cli_runner.go:164] Run: docker container inspect old-k8s-version-195171 --format={{.State.Status}}
	I0327 22:49:55.888488 1612768 addons.go:69] Setting dashboard=true in profile "old-k8s-version-195171"
	I0327 22:49:55.891452 1612768 addons.go:234] Setting addon dashboard=true in "old-k8s-version-195171"
	W0327 22:49:55.891472 1612768 addons.go:243] addon dashboard should already be in state true
	I0327 22:49:55.891527 1612768 host.go:66] Checking if "old-k8s-version-195171" exists ...
	I0327 22:49:55.891975 1612768 cli_runner.go:164] Run: docker container inspect old-k8s-version-195171 --format={{.State.Status}}
	I0327 22:49:55.888495 1612768 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-195171"
	I0327 22:49:55.892375 1612768 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-195171"
	I0327 22:49:55.892602 1612768 cli_runner.go:164] Run: docker container inspect old-k8s-version-195171 --format={{.State.Status}}
	I0327 22:49:55.888503 1612768 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-195171"
	I0327 22:49:55.895293 1612768 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-195171"
	W0327 22:49:55.895304 1612768 addons.go:243] addon metrics-server should already be in state true
	I0327 22:49:55.895339 1612768 host.go:66] Checking if "old-k8s-version-195171" exists ...
	I0327 22:49:55.895719 1612768 cli_runner.go:164] Run: docker container inspect old-k8s-version-195171 --format={{.State.Status}}
	I0327 22:49:55.946962 1612768 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-195171"
	W0327 22:49:55.946983 1612768 addons.go:243] addon default-storageclass should already be in state true
	I0327 22:49:55.947009 1612768 host.go:66] Checking if "old-k8s-version-195171" exists ...
	I0327 22:49:55.947424 1612768 cli_runner.go:164] Run: docker container inspect old-k8s-version-195171 --format={{.State.Status}}
	I0327 22:49:55.957044 1612768 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0327 22:49:55.959261 1612768 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0327 22:49:55.962480 1612768 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0327 22:49:55.962534 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0327 22:49:55.967947 1612768 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 22:49:55.967958 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0327 22:49:55.967964 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 22:49:55.968030 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:55.968031 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:55.962545 1612768 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 22:49:55.980735 1612768 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 22:49:55.980757 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 22:49:55.980820 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:56.004605 1612768 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 22:49:56.004632 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 22:49:56.004733 1612768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-195171
	I0327 22:49:56.010730 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:56.036545 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:56.052589 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:56.061413 1612768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/old-k8s-version-195171/id_rsa Username:docker}
	I0327 22:49:56.128464 1612768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 22:49:56.171183 1612768 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-195171" to be "Ready" ...
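node_ready.go's wait is deliberately tolerant of apiserver downtime: each poll that fails (like the connection-refused errors logged at 22:49:58 and 22:50:00 further down) is swallowed and retried until the 6m0s budget expires. A rough client-go sketch of that loop, assuming a standard kubeconfig (waitNodeReady is a hypothetical name):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node until its Ready condition is True or the
// deadline passes; poll errors (e.g. connection refused while the apiserver
// restarts) are treated as "not ready yet" rather than fatal.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "old-k8s-version-195171", 6*time.Minute))
}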
	I0327 22:49:56.225581 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0327 22:49:56.225607 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0327 22:49:56.260189 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 22:49:56.313334 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0327 22:49:56.313374 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0327 22:49:56.321948 1612768 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 22:49:56.321975 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0327 22:49:56.327670 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 22:49:56.403275 1612768 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 22:49:56.403301 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 22:49:56.407653 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0327 22:49:56.407678 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0327 22:49:56.492701 1612768 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 22:49:56.492729 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 22:49:56.522122 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0327 22:49:56.522155 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0327 22:49:56.594190 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:56.594242 1612768 retry.go:31] will retry after 127.475095ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
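From here the log settles into a pattern: every kubectl apply against the restarting apiserver fails with "connection refused", and retry.go:31 schedules another attempt after a short, growing, jittered delay until one finally lands. A simplified sketch of that backoff loop — the shape of it, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn until it succeeds or attempts run out, roughly doubling
// the delay each round and adding jitter, as the varying "will retry after"
// durations above suggest.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	})
	fmt.Println("final result:", err)
}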
	I0327 22:49:56.610235 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0327 22:49:56.627168 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:56.627208 1612768 retry.go:31] will retry after 199.658661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:56.637352 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0327 22:49:56.637389 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0327 22:49:56.699634 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0327 22:49:56.699666 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0327 22:49:56.722077 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 22:49:56.726505 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0327 22:49:56.726528 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0327 22:49:56.803998 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0327 22:49:56.804034 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0327 22:49:56.827233 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0327 22:49:56.838100 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:56.838133 1612768 retry.go:31] will retry after 367.941235ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:49:56.903420 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:56.903453 1612768 retry.go:31] will retry after 466.008966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:56.920191 1612768 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0327 22:49:56.920220 1612768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0327 22:49:56.996497 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:56.996532 1612768 retry.go:31] will retry after 371.742292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.001968 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0327 22:49:57.108145 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.108249 1612768 retry.go:31] will retry after 274.904068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.207053 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0327 22:49:57.319711 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.319796 1612768 retry.go:31] will retry after 508.402559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.369243 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0327 22:49:57.369843 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 22:49:57.384238 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0327 22:49:57.611028 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:49:57.611078 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.611090 1612768 retry.go:31] will retry after 508.805067ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.611107 1612768 retry.go:31] will retry after 769.903435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:49:57.661233 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.661271 1612768 retry.go:31] will retry after 211.390718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.829102 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 22:49:57.873570 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0327 22:49:57.979065 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:57.979172 1612768 retry.go:31] will retry after 321.462706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:49:58.043699 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:58.043791 1612768 retry.go:31] will retry after 309.995393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:58.120964 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0327 22:49:58.172799 1612768 node_ready.go:53] error getting node "old-k8s-version-195171": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-195171": dial tcp 192.168.76.2:8443: connect: connection refused
	W0327 22:49:58.226385 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:58.226512 1612768 retry.go:31] will retry after 1.162629417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:58.301845 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 22:49:58.354274 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0327 22:49:58.381602 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0327 22:49:58.439876 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:58.439927 1612768 retry.go:31] will retry after 1.059257015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:49:58.586607 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:58.586643 1612768 retry.go:31] will retry after 672.725089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:49:58.586698 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:58.586708 1612768 retry.go:31] will retry after 959.372089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:59.260242 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0327 22:49:59.360965 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:59.361050 1612768 retry.go:31] will retry after 984.748559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:59.390165 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0327 22:49:59.499619 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0327 22:49:59.508523 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:59.508601 1612768 retry.go:31] will retry after 1.240226269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:59.546841 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0327 22:49:59.646911 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:59.647035 1612768 retry.go:31] will retry after 1.050177971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:49:59.703726 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:49:59.703757 1612768 retry.go:31] will retry after 1.448120654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:00.174870 1612768 node_ready.go:53] error getting node "old-k8s-version-195171": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-195171": dial tcp 192.168.76.2:8443: connect: connection refused
	I0327 22:50:00.348687 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0327 22:50:00.549173 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:00.549282 1612768 retry.go:31] will retry after 1.537537893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:00.697978 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 22:50:00.749529 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0327 22:50:00.860188 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:00.860227 1612768 retry.go:31] will retry after 2.702017948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:50:00.893366 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:00.893405 1612768 retry.go:31] will retry after 957.715068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:01.152042 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0327 22:50:01.258334 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:01.258431 1612768 retry.go:31] will retry after 2.327490722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:01.851925 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0327 22:50:02.015768 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:02.015806 1612768 retry.go:31] will retry after 2.829200006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:02.087187 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0327 22:50:02.210090 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:02.210122 1612768 retry.go:31] will retry after 1.908630941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:02.672066 1612768 node_ready.go:53] error getting node "old-k8s-version-195171": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-195171": dial tcp 192.168.76.2:8443: connect: connection refused
	I0327 22:50:03.562537 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 22:50:03.586941 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0327 22:50:03.853904 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:03.853939 1612768 retry.go:31] will retry after 3.227253492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0327 22:50:03.914939 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:03.914974 1612768 retry.go:31] will retry after 3.082347351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:04.119402 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0327 22:50:04.228588 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:04.228623 1612768 retry.go:31] will retry after 6.001104688s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:04.672591 1612768 node_ready.go:53] error getting node "old-k8s-version-195171": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-195171": dial tcp 192.168.76.2:8443: connect: connection refused
	I0327 22:50:04.846012 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0327 22:50:05.020997 1612768 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:05.021027 1612768 retry.go:31] will retry after 5.992259666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0327 22:50:06.997787 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 22:50:07.082220 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 22:50:10.230352 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0327 22:50:11.014244 1612768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0327 22:50:15.394490 1612768 node_ready.go:49] node "old-k8s-version-195171" has status "Ready":"True"
	I0327 22:50:15.394526 1612768 node_ready.go:38] duration metric: took 19.223252561s for node "old-k8s-version-195171" to be "Ready" ...
	I0327 22:50:15.394540 1612768 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0327 22:50:15.615371 1612768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-fnmsr" in "kube-system" namespace to be "Ready" ...
	I0327 22:50:15.810276 1612768 pod_ready.go:92] pod "coredns-74ff55c5b-fnmsr" in "kube-system" namespace has status "Ready":"True"
	I0327 22:50:15.810307 1612768 pod_ready.go:81] duration metric: took 194.898037ms for pod "coredns-74ff55c5b-fnmsr" in "kube-system" namespace to be "Ready" ...
	I0327 22:50:15.810321 1612768 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-195171" in "kube-system" namespace to be "Ready" ...
	I0327 22:50:16.796229 1612768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.798403285s)
	I0327 22:50:16.801562 1612768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.719289717s)
	I0327 22:50:16.801605 1612768 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-195171"
	I0327 22:50:16.939427 1612768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.709025784s)
	I0327 22:50:16.945476 1612768 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features, please run:
	
		minikube -p old-k8s-version-195171 addons enable metrics-server
	
	I0327 22:50:16.939648 1612768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.925365146s)
	I0327 22:50:16.956466 1612768 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0327 22:50:16.958606 1612768 addons.go:505] duration metric: took 21.075303232s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
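
The block above shows the addon applier re-running kubectl apply with a growing, jittered backoff until the restarted apiserver starts accepting connections on 8443 again. What follows is a minimal sketch of that retry-with-backoff pattern only; it assumes nothing about minikube's internal retry package, and the manifest path is taken from the log purely for illustration.

// A minimal sketch, assuming nothing about minikube's internal retry
// package, of the retry-with-backoff pattern visible in the log above:
// re-run a command until it succeeds or a deadline passes, sleeping a
// growing, jittered interval between attempts.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry keeps invoking kubectl until it exits 0 or the overall
// deadline elapses.
func applyWithRetry(deadline time.Duration, args ...string) error {
	backoff := 500 * time.Millisecond
	stop := time.Now().Add(deadline)
	for {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("giving up: %v\n%s", err, out)
		}
		// Jittered sleep (the log shows intervals such as 984.748559ms),
		// with the base roughly doubling each round.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		backoff *= 2
	}
}

func main() {
	err := applyWithRetry(30*time.Second,
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	if err != nil {
		fmt.Println(err)
	}
}
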
	I0327 22:50:17.817874 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:20.318488 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:22.815803 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:24.816667 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:26.817026 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:29.317500 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:31.816392 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:33.816606 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:35.816835 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:37.817520 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:39.817642 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:42.335583 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:44.816085 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:46.816655 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:48.816759 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:51.317709 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:53.816984 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:56.316640 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:50:58.317848 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:00.345920 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:02.816437 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:05.373414 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:07.817271 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:09.823594 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:12.316910 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:14.323803 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:16.818253 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:19.317333 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:21.816426 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:24.316419 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:26.317496 1612768 pod_ready.go:102] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:28.316819 1612768 pod_ready.go:92] pod "etcd-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"True"
	I0327 22:51:28.316847 1612768 pod_ready.go:81] duration metric: took 1m12.506506328s for pod "etcd-old-k8s-version-195171" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:28.316862 1612768 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-195171" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:28.322308 1612768 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"True"
	I0327 22:51:28.322341 1612768 pod_ready.go:81] duration metric: took 5.470032ms for pod "kube-apiserver-old-k8s-version-195171" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:28.322354 1612768 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:30.331837 1612768 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:32.828904 1612768 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:34.830337 1612768 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:37.332395 1612768 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:39.828992 1612768 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:42.329214 1612768 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:44.331029 1612768 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"True"
	I0327 22:51:44.331118 1612768 pod_ready.go:81] duration metric: took 16.008753807s for pod "kube-controller-manager-old-k8s-version-195171" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:44.331145 1612768 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9vmnf" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:44.336285 1612768 pod_ready.go:92] pod "kube-proxy-9vmnf" in "kube-system" namespace has status "Ready":"True"
	I0327 22:51:44.336312 1612768 pod_ready.go:81] duration metric: took 5.132425ms for pod "kube-proxy-9vmnf" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:44.336323 1612768 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-195171" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:44.341505 1612768 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-195171" in "kube-system" namespace has status "Ready":"True"
	I0327 22:51:44.341535 1612768 pod_ready.go:81] duration metric: took 5.204128ms for pod "kube-scheduler-old-k8s-version-195171" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:44.341548 1612768 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace to be "Ready" ...
	I0327 22:51:46.349117 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:48.349306 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:50.849527 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:52.851630 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:55.347453 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:57.347676 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:51:59.349755 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:01.850796 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:04.348823 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:06.350216 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:08.353317 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:10.853286 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:13.347710 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:15.350032 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:17.848287 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:19.854793 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:22.357934 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:24.850945 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:27.350828 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:29.848743 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:32.348763 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:34.848067 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:36.852389 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:39.347736 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:41.348843 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:43.848028 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:45.848530 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:48.348689 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:50.847482 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:52.847657 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:54.853297 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:57.348100 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:52:59.350584 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:01.851544 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:04.348532 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:06.376043 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:08.848359 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:11.348381 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:13.847789 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:15.849018 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:17.852554 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:20.348892 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:22.848608 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:25.347868 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:27.348997 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:29.349104 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:31.848115 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:33.848548 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:36.352370 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:38.854496 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:41.348303 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:43.349159 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:45.358798 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:47.853885 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:50.348923 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:52.847277 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:54.847363 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:56.869607 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:53:59.348545 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:01.848507 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:03.848716 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:05.851854 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:08.347975 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:10.348139 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:12.348518 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:14.349215 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:16.850371 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:19.348761 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:21.848178 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:23.848540 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:25.849578 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:28.347300 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:30.348227 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:32.348499 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:34.848413 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:37.350207 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:39.350833 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:41.849163 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:44.348376 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:46.349252 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:48.349901 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:50.847814 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:52.848379 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:54.858566 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:57.369712 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:54:59.848875 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:02.348752 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:04.847997 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:06.848055 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:08.848548 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:11.347568 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:13.350754 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:15.849263 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:17.851631 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:20.349001 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:22.848144 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:25.348730 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:27.847978 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:30.348246 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:32.350973 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:34.847930 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:37.346894 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:39.348109 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:41.848316 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:43.848469 1612768 pod_ready.go:102] pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace has status "Ready":"False"
	I0327 22:55:44.352250 1612768 pod_ready.go:81] duration metric: took 4m0.010686396s for pod "metrics-server-9975d5f86-6qllz" in "kube-system" namespace to be "Ready" ...
	E0327 22:55:44.352285 1612768 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0327 22:55:44.352296 1612768 pod_ready.go:38] duration metric: took 5m28.957744434s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
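
The long run of pod_ready lines above is a plain get-and-check loop against each pod's Ready condition. Below is a hedged sketch of the same idea written against the public client-go API, not minikube's own pod_ready.go; the kubeconfig path and pod name come from this log, while the 2-second polling cadence is an assumption inferred from the timestamps.

// A hedged sketch of the Ready-condition polling loop summarized above,
// using the public client-go API rather than minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPod polls the pod every couple of seconds until it is Ready or
// the timeout expires, mirroring the get-and-check loop in the log.
func waitForPod(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	stop := time.Now().Add(timeout)
	for time.Now().Before(stop) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not \"Ready\" within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForPod(cs, "kube-system", "etcd-old-k8s-version-195171", 6*time.Minute))
}
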
	I0327 22:55:44.352318 1612768 api_server.go:52] waiting for apiserver process to appear ...
	I0327 22:55:44.352360 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0327 22:55:44.352427 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0327 22:55:44.426230 1612768 cri.go:89] found id: "2e1958328a33b420b6569e61d7a6b2e67e8002d40f7a1d0d7a4cf58319d03267"
	I0327 22:55:44.426257 1612768 cri.go:89] found id: "6d7bb312fba1e741ab92c5eb9a9f90d53b215e3c7a64f3a6790bb38e841c76ee"
	I0327 22:55:44.426263 1612768 cri.go:89] found id: ""
	I0327 22:55:44.426271 1612768 logs.go:276] 2 containers: [2e1958328a33b420b6569e61d7a6b2e67e8002d40f7a1d0d7a4cf58319d03267 6d7bb312fba1e741ab92c5eb9a9f90d53b215e3c7a64f3a6790bb38e841c76ee]
	I0327 22:55:44.426376 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.431225 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.435815 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0327 22:55:44.435895 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0327 22:55:44.494367 1612768 cri.go:89] found id: "fc0a1b53d0f819bb09ef39e6011a38cf8d57bf010bdf13c65f8d914d850e70b2"
	I0327 22:55:44.494395 1612768 cri.go:89] found id: "b6f637dff335b0a27399ab46defc429894613ed31ac5f4467f5d5fbb1a01af46"
	I0327 22:55:44.494400 1612768 cri.go:89] found id: ""
	I0327 22:55:44.494446 1612768 logs.go:276] 2 containers: [fc0a1b53d0f819bb09ef39e6011a38cf8d57bf010bdf13c65f8d914d850e70b2 b6f637dff335b0a27399ab46defc429894613ed31ac5f4467f5d5fbb1a01af46]
	I0327 22:55:44.494531 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.499322 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.504535 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0327 22:55:44.504697 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0327 22:55:44.577903 1612768 cri.go:89] found id: "99bf898f9c9cb67d469fa99cefeb6d0e14b65dced65891b8706534caba3c6c80"
	I0327 22:55:44.577927 1612768 cri.go:89] found id: "84593e4d98d96f90ffb2680fe57238a5c4cf80bf817dae59479a0c7a089ad239"
	I0327 22:55:44.577933 1612768 cri.go:89] found id: ""
	I0327 22:55:44.577940 1612768 logs.go:276] 2 containers: [99bf898f9c9cb67d469fa99cefeb6d0e14b65dced65891b8706534caba3c6c80 84593e4d98d96f90ffb2680fe57238a5c4cf80bf817dae59479a0c7a089ad239]
	I0327 22:55:44.578000 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.583280 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.589581 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0327 22:55:44.589683 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0327 22:55:44.666379 1612768 cri.go:89] found id: "756e890221cd204dc8073b60566f5e086ab44aa426abefcfc1ae8de0695ffe2a"
	I0327 22:55:44.666450 1612768 cri.go:89] found id: "ad81f139258693b946d350f892a0a317ad916974d01cf75227e4cbfb1120ad94"
	I0327 22:55:44.666462 1612768 cri.go:89] found id: ""
	I0327 22:55:44.666470 1612768 logs.go:276] 2 containers: [756e890221cd204dc8073b60566f5e086ab44aa426abefcfc1ae8de0695ffe2a ad81f139258693b946d350f892a0a317ad916974d01cf75227e4cbfb1120ad94]
	I0327 22:55:44.666547 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.671689 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.676687 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0327 22:55:44.676776 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0327 22:55:44.734497 1612768 cri.go:89] found id: "c476de2402c209bf0de9037eb9f3c324981378d17d4be3cc70e7610117f74d75"
	I0327 22:55:44.734521 1612768 cri.go:89] found id: "3242e57c5b9958b451fec095ec4db56864f938b6b05a2d40b3e313e85436386f"
	I0327 22:55:44.734530 1612768 cri.go:89] found id: ""
	I0327 22:55:44.734549 1612768 logs.go:276] 2 containers: [c476de2402c209bf0de9037eb9f3c324981378d17d4be3cc70e7610117f74d75 3242e57c5b9958b451fec095ec4db56864f938b6b05a2d40b3e313e85436386f]
	I0327 22:55:44.734613 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.739575 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.744895 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0327 22:55:44.744981 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0327 22:55:44.806925 1612768 cri.go:89] found id: "f569ef0c355df6fd4fc804073ad9b3d47dba586babdf6903d408aafbd07b1049"
	I0327 22:55:44.807003 1612768 cri.go:89] found id: "99491c7efa6ca74287288d2ca9dd78b3a2a5c1d801b69057db17e9237ab0e681"
	I0327 22:55:44.807023 1612768 cri.go:89] found id: ""
	I0327 22:55:44.807051 1612768 logs.go:276] 2 containers: [f569ef0c355df6fd4fc804073ad9b3d47dba586babdf6903d408aafbd07b1049 99491c7efa6ca74287288d2ca9dd78b3a2a5c1d801b69057db17e9237ab0e681]
	I0327 22:55:44.807158 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.811860 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.818987 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0327 22:55:44.819086 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0327 22:55:44.881707 1612768 cri.go:89] found id: "3de67807d1a06a97c2e60ef29c464d561186bc98db3ba0d1f9fa213c5c80c65d"
	I0327 22:55:44.881736 1612768 cri.go:89] found id: "d0b987e0a8e4f9e88a3d46d048c3c86ea1dcbd737d51ec21e43dd7730e08a76a"
	I0327 22:55:44.881741 1612768 cri.go:89] found id: ""
	I0327 22:55:44.881748 1612768 logs.go:276] 2 containers: [3de67807d1a06a97c2e60ef29c464d561186bc98db3ba0d1f9fa213c5c80c65d d0b987e0a8e4f9e88a3d46d048c3c86ea1dcbd737d51ec21e43dd7730e08a76a]
	I0327 22:55:44.881820 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.886194 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.890732 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0327 22:55:44.890819 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0327 22:55:44.960177 1612768 cri.go:89] found id: "d801b298852bd17bcc4648b83a9cc84a9cca57433e0871456478a81642159943"
	I0327 22:55:44.960199 1612768 cri.go:89] found id: "a6f0eaa04a1629cd4415df54b145c25c5c9c19b03373308fe9e59e05c27d176f"
	I0327 22:55:44.960208 1612768 cri.go:89] found id: ""
	I0327 22:55:44.960256 1612768 logs.go:276] 2 containers: [d801b298852bd17bcc4648b83a9cc84a9cca57433e0871456478a81642159943 a6f0eaa04a1629cd4415df54b145c25c5c9c19b03373308fe9e59e05c27d176f]
	I0327 22:55:44.960343 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.967476 1612768 ssh_runner.go:195] Run: which crictl
	I0327 22:55:44.975164 1612768 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0327 22:55:44.975311 1612768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0327 22:55:45.119288 1612768 cri.go:89] found id: "727ba30d5a28abb5c60d75f4b8540f5351812fd2401dc97a994eb0bb694e57fe"
	I0327 22:55:45.119403 1612768 cri.go:89] found id: ""
	I0327 22:55:45.119432 1612768 logs.go:276] 1 containers: [727ba30d5a28abb5c60d75f4b8540f5351812fd2401dc97a994eb0bb694e57fe]
	I0327 22:55:45.119542 1612768 ssh_runner.go:195] Run: which crictl
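
Each component's containers are discovered by asking crictl for matching container IDs, one per line, which the "Gathering logs" steps below then feed to crictl logs. A small self-contained sketch of that discovery step follows; it assumes crictl is on PATH and sudo is available, as on the test VM.

// A sketch of the container-discovery step above: ask crictl for the IDs
// of all containers (running or exited) whose name matches a component.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the whitespace-separated container IDs it prints.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// Each ID can then be passed to `crictl logs --tail 400 <id>`, as the
	// "Gathering logs for ..." lines below do.
	for _, id := range ids {
		fmt.Println(id)
	}
}
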
	I0327 22:55:45.124384 1612768 logs.go:123] Gathering logs for describe nodes ...
	I0327 22:55:45.124417 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 22:55:45.380270 1612768 logs.go:123] Gathering logs for coredns [99bf898f9c9cb67d469fa99cefeb6d0e14b65dced65891b8706534caba3c6c80] ...
	I0327 22:55:45.380309 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99bf898f9c9cb67d469fa99cefeb6d0e14b65dced65891b8706534caba3c6c80"
	I0327 22:55:45.444244 1612768 logs.go:123] Gathering logs for coredns [84593e4d98d96f90ffb2680fe57238a5c4cf80bf817dae59479a0c7a089ad239] ...
	I0327 22:55:45.444274 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84593e4d98d96f90ffb2680fe57238a5c4cf80bf817dae59479a0c7a089ad239"
	I0327 22:55:45.500850 1612768 logs.go:123] Gathering logs for kindnet [3de67807d1a06a97c2e60ef29c464d561186bc98db3ba0d1f9fa213c5c80c65d] ...
	I0327 22:55:45.500882 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3de67807d1a06a97c2e60ef29c464d561186bc98db3ba0d1f9fa213c5c80c65d"
	I0327 22:55:45.595531 1612768 logs.go:123] Gathering logs for kubernetes-dashboard [727ba30d5a28abb5c60d75f4b8540f5351812fd2401dc97a994eb0bb694e57fe] ...
	I0327 22:55:45.595621 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 727ba30d5a28abb5c60d75f4b8540f5351812fd2401dc97a994eb0bb694e57fe"
	I0327 22:55:45.654176 1612768 logs.go:123] Gathering logs for container status ...
	I0327 22:55:45.654204 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 22:55:45.727076 1612768 logs.go:123] Gathering logs for kube-apiserver [6d7bb312fba1e741ab92c5eb9a9f90d53b215e3c7a64f3a6790bb38e841c76ee] ...
	I0327 22:55:45.727154 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d7bb312fba1e741ab92c5eb9a9f90d53b215e3c7a64f3a6790bb38e841c76ee"
	I0327 22:55:45.791777 1612768 logs.go:123] Gathering logs for kube-proxy [3242e57c5b9958b451fec095ec4db56864f938b6b05a2d40b3e313e85436386f] ...
	I0327 22:55:45.791815 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3242e57c5b9958b451fec095ec4db56864f938b6b05a2d40b3e313e85436386f"
	I0327 22:55:45.879486 1612768 logs.go:123] Gathering logs for containerd ...
	I0327 22:55:45.879520 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0327 22:55:45.953744 1612768 logs.go:123] Gathering logs for dmesg ...
	I0327 22:55:45.953781 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 22:55:45.988715 1612768 logs.go:123] Gathering logs for etcd [fc0a1b53d0f819bb09ef39e6011a38cf8d57bf010bdf13c65f8d914d850e70b2] ...
	I0327 22:55:45.988795 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0a1b53d0f819bb09ef39e6011a38cf8d57bf010bdf13c65f8d914d850e70b2"
	I0327 22:55:46.045828 1612768 logs.go:123] Gathering logs for kube-scheduler [ad81f139258693b946d350f892a0a317ad916974d01cf75227e4cbfb1120ad94] ...
	I0327 22:55:46.045861 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad81f139258693b946d350f892a0a317ad916974d01cf75227e4cbfb1120ad94"
	I0327 22:55:46.102849 1612768 logs.go:123] Gathering logs for kube-proxy [c476de2402c209bf0de9037eb9f3c324981378d17d4be3cc70e7610117f74d75] ...
	I0327 22:55:46.102894 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c476de2402c209bf0de9037eb9f3c324981378d17d4be3cc70e7610117f74d75"
	I0327 22:55:46.154914 1612768 logs.go:123] Gathering logs for kube-controller-manager [f569ef0c355df6fd4fc804073ad9b3d47dba586babdf6903d408aafbd07b1049] ...
	I0327 22:55:46.154954 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f569ef0c355df6fd4fc804073ad9b3d47dba586babdf6903d408aafbd07b1049"
	I0327 22:55:46.269560 1612768 logs.go:123] Gathering logs for storage-provisioner [a6f0eaa04a1629cd4415df54b145c25c5c9c19b03373308fe9e59e05c27d176f] ...
	I0327 22:55:46.269596 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f0eaa04a1629cd4415df54b145c25c5c9c19b03373308fe9e59e05c27d176f"
	I0327 22:55:46.333473 1612768 logs.go:123] Gathering logs for storage-provisioner [d801b298852bd17bcc4648b83a9cc84a9cca57433e0871456478a81642159943] ...
	I0327 22:55:46.333499 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d801b298852bd17bcc4648b83a9cc84a9cca57433e0871456478a81642159943"
	I0327 22:55:46.409310 1612768 logs.go:123] Gathering logs for kubelet ...
	I0327 22:55:46.409339 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 22:55:46.489373 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:15 old-k8s-version-195171 kubelet[666]: E0327 22:50:15.404333     666 reflector.go:138] object-"kube-system"/"coredns-token-bwlwx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-bwlwx" is forbidden: User "system:node:old-k8s-version-195171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-195171' and this object
	W0327 22:55:46.489611 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:15 old-k8s-version-195171 kubelet[666]: E0327 22:50:15.405380     666 reflector.go:138] object-"kube-system"/"kindnet-token-lpf7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-lpf7x" is forbidden: User "system:node:old-k8s-version-195171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-195171' and this object
	W0327 22:55:46.489834 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:15 old-k8s-version-195171 kubelet[666]: E0327 22:50:15.405458     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-n7b8t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-n7b8t" is forbidden: User "system:node:old-k8s-version-195171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-195171' and this object
	W0327 22:55:46.490044 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:15 old-k8s-version-195171 kubelet[666]: E0327 22:50:15.405531     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-195171" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-195171' and this object
	W0327 22:55:46.490271 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:15 old-k8s-version-195171 kubelet[666]: E0327 22:50:15.405600     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-zpg8b": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-zpg8b" is forbidden: User "system:node:old-k8s-version-195171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-195171' and this object
	W0327 22:55:46.490499 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:15 old-k8s-version-195171 kubelet[666]: E0327 22:50:15.405656     666 reflector.go:138] object-"default"/"default-token-pxm7l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pxm7l" is forbidden: User "system:node:old-k8s-version-195171" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-195171' and this object
	W0327 22:55:46.490701 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:15 old-k8s-version-195171 kubelet[666]: E0327 22:50:15.405712     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-195171" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-195171' and this object
	W0327 22:55:46.490927 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:15 old-k8s-version-195171 kubelet[666]: E0327 22:50:15.405789     666 reflector.go:138] object-"kube-system"/"metrics-server-token-d9nwk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-d9nwk" is forbidden: User "system:node:old-k8s-version-195171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-195171' and this object
	W0327 22:55:46.500297 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:19 old-k8s-version-195171 kubelet[666]: E0327 22:50:19.306627     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0327 22:55:46.500511 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:20 old-k8s-version-195171 kubelet[666]: E0327 22:50:20.226760     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.504820 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:41 old-k8s-version-195171 kubelet[666]: E0327 22:50:41.249911     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0327 22:55:46.505414 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:43 old-k8s-version-195171 kubelet[666]: E0327 22:50:43.308472     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.506088 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:44 old-k8s-version-195171 kubelet[666]: E0327 22:50:44.322533     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.506659 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:48 old-k8s-version-195171 kubelet[666]: E0327 22:50:48.333272     666 pod_workers.go:191] Error syncing pod 2445befc-65ff-41e5-af28-4d7b7ea169c3 ("storage-provisioner_kube-system(2445befc-65ff-41e5-af28-4d7b7ea169c3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2445befc-65ff-41e5-af28-4d7b7ea169c3)"
	W0327 22:55:46.507000 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:48 old-k8s-version-195171 kubelet[666]: E0327 22:50:48.418495     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.507187 1612768 logs.go:138] Found kubelet problem: Mar 27 22:50:51 old-k8s-version-195171 kubelet[666]: E0327 22:50:51.957981     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.508244 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:01 old-k8s-version-195171 kubelet[666]: E0327 22:51:01.375259     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.510741 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:06 old-k8s-version-195171 kubelet[666]: E0327 22:51:06.965558     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0327 22:55:46.511073 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:08 old-k8s-version-195171 kubelet[666]: E0327 22:51:08.418074     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.511261 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:18 old-k8s-version-195171 kubelet[666]: E0327 22:51:18.963897     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.511592 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:20 old-k8s-version-195171 kubelet[666]: E0327 22:51:20.957044     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.511779 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:30 old-k8s-version-195171 kubelet[666]: E0327 22:51:30.957866     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.512397 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:33 old-k8s-version-195171 kubelet[666]: E0327 22:51:33.446851     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.512772 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:38 old-k8s-version-195171 kubelet[666]: E0327 22:51:38.418634     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.512966 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:42 old-k8s-version-195171 kubelet[666]: E0327 22:51:42.957649     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.513296 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:49 old-k8s-version-195171 kubelet[666]: E0327 22:51:49.960934     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.515772 1612768 logs.go:138] Found kubelet problem: Mar 27 22:51:55 old-k8s-version-195171 kubelet[666]: E0327 22:51:55.975022     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0327 22:55:46.516102 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:04 old-k8s-version-195171 kubelet[666]: E0327 22:52:04.957453     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.516299 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:09 old-k8s-version-195171 kubelet[666]: E0327 22:52:09.957828     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.516888 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:18 old-k8s-version-195171 kubelet[666]: E0327 22:52:18.554657     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.517073 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:23 old-k8s-version-195171 kubelet[666]: E0327 22:52:23.958378     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.517401 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:28 old-k8s-version-195171 kubelet[666]: E0327 22:52:28.418084     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.517589 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:38 old-k8s-version-195171 kubelet[666]: E0327 22:52:38.957580     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.517923 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:43 old-k8s-version-195171 kubelet[666]: E0327 22:52:43.958020     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.518109 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:50 old-k8s-version-195171 kubelet[666]: E0327 22:52:50.957744     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.518513 1612768 logs.go:138] Found kubelet problem: Mar 27 22:52:54 old-k8s-version-195171 kubelet[666]: E0327 22:52:54.957017     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.518852 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:05 old-k8s-version-195171 kubelet[666]: E0327 22:53:05.957594     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.519039 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:05 old-k8s-version-195171 kubelet[666]: E0327 22:53:05.957909     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.519369 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:16 old-k8s-version-195171 kubelet[666]: E0327 22:53:16.957182     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.521814 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:17 old-k8s-version-195171 kubelet[666]: E0327 22:53:17.984335     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0327 22:55:46.522145 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:30 old-k8s-version-195171 kubelet[666]: E0327 22:53:30.957036     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.522333 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:31 old-k8s-version-195171 kubelet[666]: E0327 22:53:31.957715     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.525239 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:42 old-k8s-version-195171 kubelet[666]: E0327 22:53:42.957458     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.525868 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:46 old-k8s-version-195171 kubelet[666]: E0327 22:53:46.747489     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.526202 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:48 old-k8s-version-195171 kubelet[666]: E0327 22:53:48.418116     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.526389 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:54 old-k8s-version-195171 kubelet[666]: E0327 22:53:54.957530     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.530748 1612768 logs.go:138] Found kubelet problem: Mar 27 22:53:58 old-k8s-version-195171 kubelet[666]: E0327 22:53:58.956997     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.530952 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:08 old-k8s-version-195171 kubelet[666]: E0327 22:54:08.957249     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.531282 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:09 old-k8s-version-195171 kubelet[666]: E0327 22:54:09.957348     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.531467 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:21 old-k8s-version-195171 kubelet[666]: E0327 22:54:21.957462     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.531795 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:23 old-k8s-version-195171 kubelet[666]: E0327 22:54:23.957514     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.532148 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:34 old-k8s-version-195171 kubelet[666]: E0327 22:54:34.957067     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.532335 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:34 old-k8s-version-195171 kubelet[666]: E0327 22:54:34.958263     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.532521 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:47 old-k8s-version-195171 kubelet[666]: E0327 22:54:47.960877     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.532851 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:49 old-k8s-version-195171 kubelet[666]: E0327 22:54:49.957053     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.533036 1612768 logs.go:138] Found kubelet problem: Mar 27 22:54:58 old-k8s-version-195171 kubelet[666]: E0327 22:54:58.957440     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.533365 1612768 logs.go:138] Found kubelet problem: Mar 27 22:55:03 old-k8s-version-195171 kubelet[666]: E0327 22:55:03.963334     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.533550 1612768 logs.go:138] Found kubelet problem: Mar 27 22:55:10 old-k8s-version-195171 kubelet[666]: E0327 22:55:10.957711     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.533886 1612768 logs.go:138] Found kubelet problem: Mar 27 22:55:16 old-k8s-version-195171 kubelet[666]: E0327 22:55:16.957191     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.534071 1612768 logs.go:138] Found kubelet problem: Mar 27 22:55:25 old-k8s-version-195171 kubelet[666]: E0327 22:55:25.961185     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.534416 1612768 logs.go:138] Found kubelet problem: Mar 27 22:55:28 old-k8s-version-195171 kubelet[666]: E0327 22:55:28.964031     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.534606 1612768 logs.go:138] Found kubelet problem: Mar 27 22:55:38 old-k8s-version-195171 kubelet[666]: E0327 22:55:38.957450     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.534934 1612768 logs.go:138] Found kubelet problem: Mar 27 22:55:43 old-k8s-version-195171 kubelet[666]: E0327 22:55:43.957605     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	I0327 22:55:46.534947 1612768 logs.go:123] Gathering logs for kube-apiserver [2e1958328a33b420b6569e61d7a6b2e67e8002d40f7a1d0d7a4cf58319d03267] ...
	I0327 22:55:46.534960 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e1958328a33b420b6569e61d7a6b2e67e8002d40f7a1d0d7a4cf58319d03267"
	I0327 22:55:46.607847 1612768 logs.go:123] Gathering logs for etcd [b6f637dff335b0a27399ab46defc429894613ed31ac5f4467f5d5fbb1a01af46] ...
	I0327 22:55:46.607885 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f637dff335b0a27399ab46defc429894613ed31ac5f4467f5d5fbb1a01af46"
	I0327 22:55:46.676669 1612768 logs.go:123] Gathering logs for kube-scheduler [756e890221cd204dc8073b60566f5e086ab44aa426abefcfc1ae8de0695ffe2a] ...
	I0327 22:55:46.676701 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 756e890221cd204dc8073b60566f5e086ab44aa426abefcfc1ae8de0695ffe2a"
	I0327 22:55:46.732742 1612768 logs.go:123] Gathering logs for kube-controller-manager [99491c7efa6ca74287288d2ca9dd78b3a2a5c1d801b69057db17e9237ab0e681] ...
	I0327 22:55:46.732771 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99491c7efa6ca74287288d2ca9dd78b3a2a5c1d801b69057db17e9237ab0e681"
	I0327 22:55:46.838358 1612768 logs.go:123] Gathering logs for kindnet [d0b987e0a8e4f9e88a3d46d048c3c86ea1dcbd737d51ec21e43dd7730e08a76a] ...
	I0327 22:55:46.838457 1612768 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0b987e0a8e4f9e88a3d46d048c3c86ea1dcbd737d51ec21e43dd7730e08a76a"
	I0327 22:55:46.901709 1612768 out.go:304] Setting ErrFile to fd 2...
	I0327 22:55:46.901737 1612768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 22:55:46.901788 1612768 out.go:239] X Problems detected in kubelet:
	W0327 22:55:46.901796 1612768 out.go:239]   Mar 27 22:55:16 old-k8s-version-195171 kubelet[666]: E0327 22:55:16.957191     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.901804 1612768 out.go:239]   Mar 27 22:55:25 old-k8s-version-195171 kubelet[666]: E0327 22:55:25.961185     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.901811 1612768 out.go:239]   Mar 27 22:55:28 old-k8s-version-195171 kubelet[666]: E0327 22:55:28.964031     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	W0327 22:55:46.901817 1612768 out.go:239]   Mar 27 22:55:38 old-k8s-version-195171 kubelet[666]: E0327 22:55:38.957450     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0327 22:55:46.901829 1612768 out.go:239]   Mar 27 22:55:43 old-k8s-version-195171 kubelet[666]: E0327 22:55:43.957605     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	I0327 22:55:46.901835 1612768 out.go:304] Setting ErrFile to fd 2...
	I0327 22:55:46.901841 1612768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:55:56.903213 1612768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 22:55:56.916039 1612768 api_server.go:72] duration metric: took 6m1.033073946s to wait for apiserver process to appear ...
	I0327 22:55:56.916062 1612768 api_server.go:88] waiting for apiserver healthz status ...
	I0327 22:55:56.928667 1612768 out.go:177] 
	W0327 22:55:56.940754 1612768 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0327 22:55:56.940779 1612768 out.go:239] * 
	W0327 22:55:56.942316 1612768 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 22:55:56.945457 1612768 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-195171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-195171
helpers_test.go:235: (dbg) docker inspect old-k8s-version-195171:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "44d94f6de3730b675d78a00bc0cef4a196dac70db8de0f2bde7bce6ef9896236",
	        "Created": "2024-03-27T22:46:57.810013843Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1612976,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-27T22:49:48.253523649Z",
	            "FinishedAt": "2024-03-27T22:49:46.257504842Z"
	        },
	        "Image": "sha256:f9b5358e8c18dbe49e632154cad75e0968b2e103f621caff2c3ed996f4155861",
	        "ResolvConfPath": "/var/lib/docker/containers/44d94f6de3730b675d78a00bc0cef4a196dac70db8de0f2bde7bce6ef9896236/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44d94f6de3730b675d78a00bc0cef4a196dac70db8de0f2bde7bce6ef9896236/hostname",
	        "HostsPath": "/var/lib/docker/containers/44d94f6de3730b675d78a00bc0cef4a196dac70db8de0f2bde7bce6ef9896236/hosts",
	        "LogPath": "/var/lib/docker/containers/44d94f6de3730b675d78a00bc0cef4a196dac70db8de0f2bde7bce6ef9896236/44d94f6de3730b675d78a00bc0cef4a196dac70db8de0f2bde7bce6ef9896236-json.log",
	        "Name": "/old-k8s-version-195171",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-195171:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-195171",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5ad47df5107309a41c0ff4cf6baad7acce820586e37a671f590994a3fc1b437-init/diff:/var/lib/docker/overlay2/9aff79c4d350679b403430af5e9f1b0f6423798443e2d342556eedd63c4805d0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5ad47df5107309a41c0ff4cf6baad7acce820586e37a671f590994a3fc1b437/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5ad47df5107309a41c0ff4cf6baad7acce820586e37a671f590994a3fc1b437/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5ad47df5107309a41c0ff4cf6baad7acce820586e37a671f590994a3fc1b437/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-195171",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-195171/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-195171",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-195171",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-195171",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "062e8667fee5e531f416d7f5f31dc780e67f87a8395ad490c179a46c01ccc8c6",
	            "SandboxKey": "/var/run/docker/netns/062e8667fee5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34594"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34591"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34593"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34592"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-195171": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "315196e6af8aee0d6666dad71c38b78ec6eba8a8693cf9f490a17d05f6d1fe66",
	                    "EndpointID": "d7ac6601831d1f0dc14735433e892ea8f4776e24363ab3c8f644b0c3a0a32fa2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-195171",
	                        "44d94f6de373"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-195171 -n old-k8s-version-195171
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-195171 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-195171 logs -n 25: (3.60045338s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-348519                              | cert-expiration-348519   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:45 UTC | 27 Mar 24 22:46 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	| ssh     | force-systemd-env-151767                               | force-systemd-env-151767 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:46 UTC | 27 Mar 24 22:46 UTC |
	|         | ssh cat                                                |                          |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |                |                     |                     |
	| delete  | -p force-systemd-env-151767                            | force-systemd-env-151767 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:46 UTC | 27 Mar 24 22:46 UTC |
	| start   | -p cert-options-876699                                 | cert-options-876699      | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:46 UTC | 27 Mar 24 22:46 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |                |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |                |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |                |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	| ssh     | cert-options-876699 ssh                                | cert-options-876699      | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:46 UTC | 27 Mar 24 22:46 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |                |                     |                     |
	| ssh     | -p cert-options-876699 -- sudo                         | cert-options-876699      | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:46 UTC | 27 Mar 24 22:46 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |                |                     |                     |
	| delete  | -p cert-options-876699                                 | cert-options-876699      | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:46 UTC | 27 Mar 24 22:46 UTC |
	| start   | -p old-k8s-version-195171                              | old-k8s-version-195171   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:46 UTC | 27 Mar 24 22:49 UTC |
	|         | --memory=2200                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --kvm-network=default                                  |                          |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |                |                     |                     |
	|         | --keep-context=false                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |                |                     |                     |
	| start   | -p cert-expiration-348519                              | cert-expiration-348519   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:49 UTC | 27 Mar 24 22:49 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	| delete  | -p cert-expiration-348519                              | cert-expiration-348519   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:49 UTC | 27 Mar 24 22:49 UTC |
	| start   | -p no-preload-463483 --memory=2200                     | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:49 UTC | 27 Mar 24 22:50 UTC |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --preload=false --driver=docker                        |                          |         |                |                     |                     |
	|         |  --container-runtime=containerd                        |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                          |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-195171        | old-k8s-version-195171   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:49 UTC | 27 Mar 24 22:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |                |                     |                     |
	| stop    | -p old-k8s-version-195171                              | old-k8s-version-195171   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:49 UTC | 27 Mar 24 22:49 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-195171             | old-k8s-version-195171   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:49 UTC | 27 Mar 24 22:49 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |                |                     |                     |
	| start   | -p old-k8s-version-195171                              | old-k8s-version-195171   | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:49 UTC |                     |
	|         | --memory=2200                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --kvm-network=default                                  |                          |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |                |                     |                     |
	|         | --keep-context=false                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-463483             | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:50 UTC | 27 Mar 24 22:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |                |                     |                     |
	| stop    | -p no-preload-463483                                   | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:50 UTC | 27 Mar 24 22:51 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-463483                  | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:51 UTC | 27 Mar 24 22:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |                |                     |                     |
	| start   | -p no-preload-463483 --memory=2200                     | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:51 UTC | 27 Mar 24 22:55 UTC |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --preload=false --driver=docker                        |                          |         |                |                     |                     |
	|         |  --container-runtime=containerd                        |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                          |         |                |                     |                     |
	| image   | no-preload-463483 image list                           | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:55 UTC | 27 Mar 24 22:55 UTC |
	|         | --format=json                                          |                          |         |                |                     |                     |
	| pause   | -p no-preload-463483                                   | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:55 UTC | 27 Mar 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |                |                     |                     |
	| unpause | -p no-preload-463483                                   | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:55 UTC | 27 Mar 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |                |                     |                     |
	| delete  | -p no-preload-463483                                   | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:55 UTC | 27 Mar 24 22:55 UTC |
	| delete  | -p no-preload-463483                                   | no-preload-463483        | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:55 UTC | 27 Mar 24 22:55 UTC |
	| start   | -p embed-certs-627479                                  | embed-certs-627479       | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:55 UTC |                     |
	|         | --memory=2200                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                          |         |                |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 22:55:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 22:55:50.896768 1623784 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:55:50.896907 1623784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:55:50.896918 1623784 out.go:304] Setting ErrFile to fd 2...
	I0327 22:55:50.896923 1623784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:55:50.897184 1623784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:55:50.897615 1623784 out.go:298] Setting JSON to false
	I0327 22:55:50.898738 1623784 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23889,"bootTime":1711556262,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:55:50.898830 1623784 start.go:139] virtualization:  
	I0327 22:55:50.902200 1623784 out.go:177] * [embed-certs-627479] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 22:55:50.906983 1623784 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 22:55:50.910116 1623784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:55:50.907078 1623784 notify.go:220] Checking for updates...
	I0327 22:55:50.915697 1623784 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:55:50.918532 1623784 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:55:50.921028 1623784 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 22:55:50.924363 1623784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 22:55:50.927228 1623784 config.go:182] Loaded profile config "old-k8s-version-195171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0327 22:55:50.927352 1623784 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:55:50.947155 1623784 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:55:50.947286 1623784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:55:51.022153 1623784 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 22:55:51.011352954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:55:51.022303 1623784 docker.go:295] overlay module found
	I0327 22:55:51.025110 1623784 out.go:177] * Using the docker driver based on user configuration
	I0327 22:55:51.027542 1623784 start.go:297] selected driver: docker
	I0327 22:55:51.027567 1623784 start.go:901] validating driver "docker" against <nil>
	I0327 22:55:51.027583 1623784 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 22:55:51.028250 1623784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:55:51.095426 1623784 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 22:55:51.083627812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:55:51.095598 1623784 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 22:55:51.095868 1623784 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 22:55:51.098464 1623784 out.go:177] * Using Docker driver with root privileges
	I0327 22:55:51.101249 1623784 cni.go:84] Creating CNI manager for ""
	I0327 22:55:51.101287 1623784 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:55:51.101337 1623784 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 22:55:51.101448 1623784 start.go:340] cluster config:
	{Name:embed-certs-627479 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-627479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:55:51.104641 1623784 out.go:177] * Starting "embed-certs-627479" primary control-plane node in "embed-certs-627479" cluster
	I0327 22:55:51.107341 1623784 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 22:55:51.110051 1623784 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 22:55:51.112809 1623784 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 22:55:51.112922 1623784 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0327 22:55:51.112955 1623784 cache.go:56] Caching tarball of preloaded images
	I0327 22:55:51.112922 1623784 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 22:55:51.113099 1623784 preload.go:173] Found /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 22:55:51.113111 1623784 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0327 22:55:51.113227 1623784 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/embed-certs-627479/config.json ...
	I0327 22:55:51.113249 1623784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/embed-certs-627479/config.json: {Name:mk4b0a8edefd503fd8fed228c390ed6ed06d6498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 22:55:51.128861 1623784 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon, skipping pull
	I0327 22:55:51.128889 1623784 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in daemon, skipping load
	I0327 22:55:51.128938 1623784 cache.go:194] Successfully downloaded all kic artifacts
	I0327 22:55:51.128972 1623784 start.go:360] acquireMachinesLock for embed-certs-627479: {Name:mk03af60fc4f732b3873e7e4c291b57ff7c53c7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 22:55:51.129118 1623784 start.go:364] duration metric: took 125.756µs to acquireMachinesLock for "embed-certs-627479"
	I0327 22:55:51.129149 1623784 start.go:93] Provisioning new machine with config: &{Name:embed-certs-627479 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-627479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 22:55:51.129243 1623784 start.go:125] createHost starting for "" (driver="docker")
	I0327 22:55:51.132383 1623784 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0327 22:55:51.132712 1623784 start.go:159] libmachine.API.Create for "embed-certs-627479" (driver="docker")
	I0327 22:55:51.132762 1623784 client.go:168] LocalClient.Create starting
	I0327 22:55:51.132864 1623784 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/ca.pem
	I0327 22:55:51.132913 1623784 main.go:141] libmachine: Decoding PEM data...
	I0327 22:55:51.132932 1623784 main.go:141] libmachine: Parsing certificate...
	I0327 22:55:51.133011 1623784 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17735-1410709/.minikube/certs/cert.pem
	I0327 22:55:51.133040 1623784 main.go:141] libmachine: Decoding PEM data...
	I0327 22:55:51.133051 1623784 main.go:141] libmachine: Parsing certificate...
	I0327 22:55:51.133505 1623784 cli_runner.go:164] Run: docker network inspect embed-certs-627479 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0327 22:55:51.148856 1623784 cli_runner.go:211] docker network inspect embed-certs-627479 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0327 22:55:51.148966 1623784 network_create.go:281] running [docker network inspect embed-certs-627479] to gather additional debugging logs...
	I0327 22:55:51.148988 1623784 cli_runner.go:164] Run: docker network inspect embed-certs-627479
	W0327 22:55:51.164767 1623784 cli_runner.go:211] docker network inspect embed-certs-627479 returned with exit code 1
	I0327 22:55:51.164805 1623784 network_create.go:284] error running [docker network inspect embed-certs-627479]: docker network inspect embed-certs-627479: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-627479 not found
	I0327 22:55:51.164819 1623784 network_create.go:286] output of [docker network inspect embed-certs-627479]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-627479 not found
	
	** /stderr **
	I0327 22:55:51.164922 1623784 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 22:55:51.180581 1623784 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c8d27b31f72b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:70:df:9b:59} reservation:<nil>}
	I0327 22:55:51.181090 1623784 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5fcedddcda34 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:16:12:c7:01} reservation:<nil>}
	I0327 22:55:51.181585 1623784 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2bd1e1f20876 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:5a:02:bb:4f} reservation:<nil>}
	I0327 22:55:51.182007 1623784 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-315196e6af8a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ca:f8:9b:4c} reservation:<nil>}
	I0327 22:55:51.182780 1623784 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025cf390}
	I0327 22:55:51.182831 1623784 network_create.go:124] attempt to create docker network embed-certs-627479 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0327 22:55:51.182935 1623784 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-627479 embed-certs-627479
	I0327 22:55:51.246070 1623784 network_create.go:108] docker network embed-certs-627479 192.168.85.0/24 created
	I0327 22:55:51.246103 1623784 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-627479" container
	I0327 22:55:51.246179 1623784 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0327 22:55:51.260154 1623784 cli_runner.go:164] Run: docker volume create embed-certs-627479 --label name.minikube.sigs.k8s.io=embed-certs-627479 --label created_by.minikube.sigs.k8s.io=true
	I0327 22:55:51.276160 1623784 oci.go:103] Successfully created a docker volume embed-certs-627479
	I0327 22:55:51.276242 1623784 cli_runner.go:164] Run: docker run --rm --name embed-certs-627479-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-627479 --entrypoint /usr/bin/test -v embed-certs-627479:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib
	I0327 22:55:51.864173 1623784 oci.go:107] Successfully prepared a docker volume embed-certs-627479
	I0327 22:55:51.864221 1623784 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 22:55:51.864243 1623784 kic.go:194] Starting extracting preloaded images to volume ...
	I0327 22:55:51.864338 1623784 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-627479:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0327 22:55:56.903213 1612768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 22:55:56.916039 1612768 api_server.go:72] duration metric: took 6m1.033073946s to wait for apiserver process to appear ...
	I0327 22:55:56.916062 1612768 api_server.go:88] waiting for apiserver healthz status ...
	I0327 22:55:56.928667 1612768 out.go:177] 
	W0327 22:55:56.940754 1612768 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0327 22:55:56.940779 1612768 out.go:239] * 
	W0327 22:55:56.942316 1612768 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 22:55:56.945457 1612768 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f907a721684f6       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   67752281548d6       dashboard-metrics-scraper-8d5bb5db8-pmtpv
	d801b298852bd       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   0be66265a71a7       storage-provisioner
	727ba30d5a28a       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   b7cda2e7c00e5       kubernetes-dashboard-cd95d586-cflzk
	c476de2402c20       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   2d7e6ad9e22ca       kube-proxy-9vmnf
	fb8b18bcde74e       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   04c6538122374       busybox
	99bf898f9c9cb       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   3777a7e2896c1       coredns-74ff55c5b-fnmsr
	a6f0eaa04a162       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   0be66265a71a7       storage-provisioner
	3de67807d1a06       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   1a5c458ebbe73       kindnet-tshsk
	fc0a1b53d0f81       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   77b13e6b16ed8       etcd-old-k8s-version-195171
	2e1958328a33b       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   27fc7ce992084       kube-apiserver-old-k8s-version-195171
	f569ef0c355df       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   c387c9ebc3345       kube-controller-manager-old-k8s-version-195171
	756e890221cd2       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   9516ffcf7e75f       kube-scheduler-old-k8s-version-195171
	7b17c3f6bc52b       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   38e367012feed       busybox
	84593e4d98d96       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   6d28d653444f8       coredns-74ff55c5b-fnmsr
	d0b987e0a8e4f       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   b9d7a3cf64c67       kindnet-tshsk
	3242e57c5b995       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   36ca7ab30532d       kube-proxy-9vmnf
	99491c7efa6ca       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   f7a11b48f09a6       kube-controller-manager-old-k8s-version-195171
	b6f637dff335b       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   47a8b3a07b93c       etcd-old-k8s-version-195171
	6d7bb312fba1e       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   0879a924cf5df       kube-apiserver-old-k8s-version-195171
	ad81f13925869       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   a5efad01c77b8       kube-scheduler-old-k8s-version-195171
	
	
	==> containerd <==
	Mar 27 22:51:55 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:51:55.971079541Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 27 22:51:55 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:51:55.974522149Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 27 22:52:17 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:17.959753889Z" level=info msg="CreateContainer within sandbox \"67752281548d6d0ec64b9f0477c95a34bae8f550c4fa66e9ce0cacc34a20002c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Mar 27 22:52:17 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:17.987616190Z" level=info msg="CreateContainer within sandbox \"67752281548d6d0ec64b9f0477c95a34bae8f550c4fa66e9ce0cacc34a20002c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"abd5f3ab2c376d54cf542752444174be3ec28827ab4ea5104e52aba33700d557\""
	Mar 27 22:52:17 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:17.988204540Z" level=info msg="StartContainer for \"abd5f3ab2c376d54cf542752444174be3ec28827ab4ea5104e52aba33700d557\""
	Mar 27 22:52:18 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:18.081808430Z" level=info msg="StartContainer for \"abd5f3ab2c376d54cf542752444174be3ec28827ab4ea5104e52aba33700d557\" returns successfully"
	Mar 27 22:52:18 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:18.113365795Z" level=info msg="shim disconnected" id=abd5f3ab2c376d54cf542752444174be3ec28827ab4ea5104e52aba33700d557
	Mar 27 22:52:18 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:18.113435824Z" level=warning msg="cleaning up after shim disconnected" id=abd5f3ab2c376d54cf542752444174be3ec28827ab4ea5104e52aba33700d557 namespace=k8s.io
	Mar 27 22:52:18 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:18.113449419Z" level=info msg="cleaning up dead shim"
	Mar 27 22:52:18 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:18.125331612Z" level=warning msg="cleanup warnings time=\"2024-03-27T22:52:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2993 runtime=io.containerd.runc.v2\n"
	Mar 27 22:52:18 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:18.556415076Z" level=info msg="RemoveContainer for \"de3590b46dfc05f9c2e8a55ac3f7e3cd763903a4e7e698736137c2a24811a060\""
	Mar 27 22:52:18 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:52:18.569034102Z" level=info msg="RemoveContainer for \"de3590b46dfc05f9c2e8a55ac3f7e3cd763903a4e7e698736137c2a24811a060\" returns successfully"
	Mar 27 22:53:17 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:17.957958948Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:53:17 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:17.972676862Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 27 22:53:17 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:17.983320261Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 27 22:53:45 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:45.963789149Z" level=info msg="CreateContainer within sandbox \"67752281548d6d0ec64b9f0477c95a34bae8f550c4fa66e9ce0cacc34a20002c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 27 22:53:45 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:45.987818625Z" level=info msg="CreateContainer within sandbox \"67752281548d6d0ec64b9f0477c95a34bae8f550c4fa66e9ce0cacc34a20002c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd\""
	Mar 27 22:53:45 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:45.988660449Z" level=info msg="StartContainer for \"f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd\""
	Mar 27 22:53:46 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:46.059086571Z" level=info msg="StartContainer for \"f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd\" returns successfully"
	Mar 27 22:53:46 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:46.083963778Z" level=info msg="shim disconnected" id=f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:53:46 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:46.084028834Z" level=warning msg="cleaning up after shim disconnected" id=f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd namespace=k8s.io
	Mar 27 22:53:46 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:46.084041142Z" level=info msg="cleaning up dead shim"
	Mar 27 22:53:46 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:46.092305379Z" level=warning msg="cleanup warnings time=\"2024-03-27T22:53:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3243 runtime=io.containerd.runc.v2\n"
	Mar 27 22:53:46 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:46.750227632Z" level=info msg="RemoveContainer for \"abd5f3ab2c376d54cf542752444174be3ec28827ab4ea5104e52aba33700d557\""
	Mar 27 22:53:46 old-k8s-version-195171 containerd[570]: time="2024-03-27T22:53:46.756580184Z" level=info msg="RemoveContainer for \"abd5f3ab2c376d54cf542752444174be3ec28827ab4ea5104e52aba33700d557\" returns successfully"
	
	
	==> coredns [84593e4d98d96f90ffb2680fe57238a5c4cf80bf817dae59479a0c7a089ad239] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:39460 - 39692 "HINFO IN 1495832843458285059.8598672652714269592. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04064945s
	
	
	==> coredns [99bf898f9c9cb67d469fa99cefeb6d0e14b65dced65891b8706534caba3c6c80] <==
	I0327 22:50:47.866508       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-27 22:50:17.865987557 +0000 UTC m=+0.021206039) (total time: 30.000347894s):
	Trace[2019727887]: [30.000347894s] [30.000347894s] END
	E0327 22:50:47.866667       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0327 22:50:47.866912       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-27 22:50:17.86606008 +0000 UTC m=+0.021278562) (total time: 30.00081902s):
	Trace[939984059]: [30.00081902s] [30.00081902s] END
	E0327 22:50:47.866926       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0327 22:50:47.867297       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-27 22:50:17.866004795 +0000 UTC m=+0.021223286) (total time: 30.001271241s):
	Trace[911902081]: [30.001271241s] [30.001271241s] END
	E0327 22:50:47.867309       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:36702 - 21927 "HINFO IN 3980509901408872320.6611293485139233094. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077536222s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-195171
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-195171
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81
	                    minikube.k8s.io/name=old-k8s-version-195171
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T22_47_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 22:47:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-195171
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 22:55:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 22:51:05 +0000   Wed, 27 Mar 2024 22:47:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 22:51:05 +0000   Wed, 27 Mar 2024 22:47:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 22:51:05 +0000   Wed, 27 Mar 2024 22:47:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 22:51:05 +0000   Wed, 27 Mar 2024 22:47:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-195171
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 5164705227f34a13bc6da7f0faad1e66
	  System UUID:                b9d78c07-560d-4a8f-92e0-7b900fe83760
	  Boot ID:                    3ced2ab6-f576-451e-8762-49421fd13f89
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-fnmsr                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m8s
	  kube-system                 etcd-old-k8s-version-195171                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m15s
	  kube-system                 kindnet-tshsk                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m8s
	  kube-system                 kube-apiserver-old-k8s-version-195171             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-controller-manager-old-k8s-version-195171    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-proxy-9vmnf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-old-k8s-version-195171             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 metrics-server-9975d5f86-6qllz                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m26s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-pmtpv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-cflzk               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m35s (x5 over 8m35s)  kubelet     Node old-k8s-version-195171 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s (x4 over 8m35s)  kubelet     Node old-k8s-version-195171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s (x5 over 8m35s)  kubelet     Node old-k8s-version-195171 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m15s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet     Node old-k8s-version-195171 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet     Node old-k8s-version-195171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s                  kubelet     Node old-k8s-version-195171 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m8s                   kubelet     Node old-k8s-version-195171 status is now: NodeReady
	  Normal  Starting                 8m7s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m56s)  kubelet     Node old-k8s-version-195171 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x7 over 5m56s)  kubelet     Node old-k8s-version-195171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x8 over 5m56s)  kubelet     Node old-k8s-version-195171 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001048] FS-Cache: O-key=[8] '1a75ed0000000000'
	[  +0.000693] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000b6bf5524{9p.inode} n=00000000584c6a56
	[  +0.001108] FS-Cache: N-key=[8] '1a75ed0000000000'
	[  +0.003717] FS-Cache: Duplicate cookie detected
	[  +0.000688] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001167] FS-Cache: O-cookie d=00000000b6bf5524{9p.inode} n=00000000d2921005
	[  +0.001162] FS-Cache: O-key=[8] '1a75ed0000000000'
	[  +0.000759] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000b6bf5524{9p.inode} n=00000000cfe805fe
	[  +0.001066] FS-Cache: N-key=[8] '1a75ed0000000000'
	[Mar27 22:12] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=00000000b6bf5524{9p.inode} n=0000000005073a3e
	[  +0.001030] FS-Cache: O-key=[8] '1975ed0000000000'
	[  +0.000768] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000919] FS-Cache: N-cookie d=00000000b6bf5524{9p.inode} n=0000000076faedbd
	[  +0.001095] FS-Cache: N-key=[8] '1975ed0000000000'
	[  +0.318423] FS-Cache: Duplicate cookie detected
	[  +0.000694] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000b6bf5524{9p.inode} n=00000000705c765f
	[  +0.001072] FS-Cache: O-key=[8] '1f75ed0000000000'
	[  +0.000787] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001007] FS-Cache: N-cookie d=00000000b6bf5524{9p.inode} n=00000000d6e7d61d
	[  +0.001117] FS-Cache: N-key=[8] '1f75ed0000000000'
	
	
	==> etcd [b6f637dff335b0a27399ab46defc429894613ed31ac5f4467f5d5fbb1a01af46] <==
	raft2024/03/27 22:47:25 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/03/27 22:47:25 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/03/27 22:47:25 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/03/27 22:47:25 INFO: ea7e25599daad906 became leader at term 2
	raft2024/03/27 22:47:25 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-03-27 22:47:25.818016 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-27 22:47:25.822006 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-27 22:47:25.822104 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-27 22:47:25.822172 I | etcdserver: published {Name:old-k8s-version-195171 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-03-27 22:47:25.822308 I | embed: ready to serve client requests
	2024-03-27 22:47:25.823660 I | embed: serving client requests on 192.168.76.2:2379
	2024-03-27 22:47:25.824967 I | embed: ready to serve client requests
	2024-03-27 22:47:25.826251 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-27 22:47:45.430068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:47:50.740219 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:48:00.740392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:48:10.740284 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:48:20.740368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:48:30.740349 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:48:40.740388 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:48:50.740180 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:49:00.740386 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:49:10.740410 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:49:20.740458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:49:30.740455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [fc0a1b53d0f819bb09ef39e6011a38cf8d57bf010bdf13c65f8d914d850e70b2] <==
	2024-03-27 22:51:54.621327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:52:04.621295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:52:14.621222 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:52:24.621271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:52:34.621320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:52:44.621566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:52:54.621202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:53:04.621556 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:53:14.621391 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:53:24.621316 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:53:34.621302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:53:44.621510 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:53:54.621248 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:54:04.621427 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:54:14.621146 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:54:24.621203 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:54:34.621292 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:54:44.621331 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:54:54.621326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:55:04.621494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:55:14.621470 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:55:24.621174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:55:34.621264 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:55:44.622038 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-27 22:55:54.621492 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 22:55:59 up  6:38,  0 users,  load average: 0.74, 1.42, 2.06
	Linux old-k8s-version-195171 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3de67807d1a06a97c2e60ef29c464d561186bc98db3ba0d1f9fa213c5c80c65d] <==
	I0327 22:53:58.066323       1 main.go:227] handling current node
	I0327 22:54:08.074543       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:54:08.074759       1 main.go:227] handling current node
	I0327 22:54:18.082826       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:54:18.082854       1 main.go:227] handling current node
	I0327 22:54:28.105542       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:54:28.105641       1 main.go:227] handling current node
	I0327 22:54:38.111502       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:54:38.111529       1 main.go:227] handling current node
	I0327 22:54:48.125826       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:54:48.125864       1 main.go:227] handling current node
	I0327 22:54:58.141332       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:54:58.141366       1 main.go:227] handling current node
	I0327 22:55:08.154557       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:55:08.154588       1 main.go:227] handling current node
	I0327 22:55:18.160550       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:55:18.160583       1 main.go:227] handling current node
	I0327 22:55:28.176537       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:55:28.176573       1 main.go:227] handling current node
	I0327 22:55:38.187625       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:55:38.187653       1 main.go:227] handling current node
	I0327 22:55:48.194144       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:55:48.194179       1 main.go:227] handling current node
	I0327 22:55:58.212397       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:55:58.212426       1 main.go:227] handling current node
	
	
	==> kindnet [d0b987e0a8e4f9e88a3d46d048c3c86ea1dcbd737d51ec21e43dd7730e08a76a] <==
	I0327 22:47:52.736269       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0327 22:47:52.736379       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0327 22:47:52.738531       1 main.go:116] setting mtu 1500 for CNI 
	I0327 22:47:52.738553       1 main.go:146] kindnetd IP family: "ipv4"
	I0327 22:47:52.738567       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0327 22:48:22.978576       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0327 22:48:22.992778       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:48:22.993269       1 main.go:227] handling current node
	I0327 22:48:33.019570       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:48:33.019611       1 main.go:227] handling current node
	I0327 22:48:43.041681       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:48:43.041713       1 main.go:227] handling current node
	I0327 22:48:53.046036       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:48:53.046065       1 main.go:227] handling current node
	I0327 22:49:03.067115       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:49:03.067149       1 main.go:227] handling current node
	I0327 22:49:13.088292       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:49:13.088322       1 main.go:227] handling current node
	I0327 22:49:23.100549       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:49:23.100643       1 main.go:227] handling current node
	I0327 22:49:33.131108       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0327 22:49:33.131138       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2e1958328a33b420b6569e61d7a6b2e67e8002d40f7a1d0d7a4cf58319d03267] <==
	I0327 22:52:19.891756       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:52:19.891862       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0327 22:52:58.315384       1 client.go:360] parsed scheme: "passthrough"
	I0327 22:52:58.315431       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:52:58.315462       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0327 22:53:18.255305       1 handler_proxy.go:102] no RequestInfo found in the context
	E0327 22:53:18.255397       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0327 22:53:18.255413       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0327 22:53:38.196616       1 client.go:360] parsed scheme: "passthrough"
	I0327 22:53:38.196663       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:53:38.196673       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0327 22:54:19.880716       1 client.go:360] parsed scheme: "passthrough"
	I0327 22:54:19.880763       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:54:19.880772       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0327 22:55:01.223089       1 client.go:360] parsed scheme: "passthrough"
	I0327 22:55:01.223130       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:55:01.223139       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0327 22:55:16.524304       1 handler_proxy.go:102] no RequestInfo found in the context
	E0327 22:55:16.524493       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0327 22:55:16.524560       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0327 22:55:44.976049       1 client.go:360] parsed scheme: "passthrough"
	I0327 22:55:44.976112       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:55:44.976121       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [6d7bb312fba1e741ab92c5eb9a9f90d53b215e3c7a64f3a6790bb38e841c76ee] <==
	I0327 22:47:33.668545       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0327 22:47:33.674903       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0327 22:47:33.679537       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0327 22:47:33.679572       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0327 22:47:34.144399       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0327 22:47:34.192014       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0327 22:47:34.304877       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0327 22:47:34.305984       1 controller.go:606] quota admission added evaluator for: endpoints
	I0327 22:47:34.310015       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0327 22:47:35.387271       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0327 22:47:35.995991       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0327 22:47:36.073072       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0327 22:47:44.530947       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 22:47:51.323424       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0327 22:47:51.630964       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0327 22:48:03.902909       1 client.go:360] parsed scheme: "passthrough"
	I0327 22:48:03.902954       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:48:03.902963       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0327 22:48:47.117752       1 client.go:360] parsed scheme: "passthrough"
	I0327 22:48:47.117801       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:48:47.117810       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0327 22:49:30.988780       1 upgradeaware.go:373] Error proxying data from client to backend: write tcp 192.168.76.2:42742->192.168.76.2:10250: write: broken pipe
	I0327 22:49:31.136478       1 client.go:360] parsed scheme: "passthrough"
	I0327 22:49:31.136713       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0327 22:49:31.136822       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [99491c7efa6ca74287288d2ca9dd78b3a2a5c1d801b69057db17e9237ab0e681] <==
	I0327 22:47:51.407410       1 range_allocator.go:373] Set node old-k8s-version-195171 PodCIDR to [10.244.0.0/24]
	I0327 22:47:51.417296       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-fnmsr"
	I0327 22:47:51.425931       1 shared_informer.go:247] Caches are synced for endpoint 
	I0327 22:47:51.437515       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0327 22:47:51.437794       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0327 22:47:51.438105       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0327 22:47:51.439148       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0327 22:47:51.465150       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-c29j8"
	I0327 22:47:51.495859       1 shared_informer.go:247] Caches are synced for disruption 
	I0327 22:47:51.495886       1 disruption.go:339] Sending events to api server.
	I0327 22:47:51.535900       1 shared_informer.go:247] Caches are synced for stateful set 
	I0327 22:47:51.553575       1 shared_informer.go:247] Caches are synced for resource quota 
	I0327 22:47:51.583753       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0327 22:47:51.586810       1 shared_informer.go:247] Caches are synced for resource quota 
	I0327 22:47:51.743971       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0327 22:47:51.795013       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9vmnf"
	I0327 22:47:51.795102       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tshsk"
	E0327 22:47:51.864559       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"bcfab06c-d2eb-41b3-92f9-ad4ba4717751", ResourceVersion:"269", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63847176456, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001cd7860), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001cd7880)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001cd78a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001cd5640), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001cd7
8c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001cd78e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001cd7920)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001ccb3e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400101e078), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000b3bc00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400071f958)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400101e0c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0327 22:47:52.043599       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0327 22:47:52.043631       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0327 22:47:52.044078       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0327 22:47:53.021905       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0327 22:47:53.039256       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-c29j8"
	I0327 22:47:56.335865       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0327 22:49:32.444060       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [f569ef0c355df6fd4fc804073ad9b3d47dba586babdf6903d408aafbd07b1049] <==
	W0327 22:51:39.705069       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0327 22:52:05.031958       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0327 22:52:11.355672       1 request.go:655] Throttling request took 1.048301879s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W0327 22:52:12.207124       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0327 22:52:35.535603       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0327 22:52:43.857747       1 request.go:655] Throttling request took 1.04826529s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0327 22:52:44.709373       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0327 22:53:06.037839       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0327 22:53:16.359775       1 request.go:655] Throttling request took 1.048218732s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0327 22:53:17.211334       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0327 22:53:36.539689       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0327 22:53:48.861744       1 request.go:655] Throttling request took 1.048421111s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0327 22:53:49.713159       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0327 22:54:07.041702       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0327 22:54:21.363554       1 request.go:655] Throttling request took 1.047912946s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
	W0327 22:54:22.215241       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0327 22:54:37.543585       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0327 22:54:53.865792       1 request.go:655] Throttling request took 1.048323182s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0327 22:54:54.717298       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0327 22:55:08.046103       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0327 22:55:26.367891       1 request.go:655] Throttling request took 1.047477764s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0327 22:55:27.219503       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0327 22:55:38.548271       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0327 22:55:59.020904       1 request.go:655] Throttling request took 1.0188811s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W0327 22:55:59.722218       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [3242e57c5b9958b451fec095ec4db56864f938b6b05a2d40b3e313e85436386f] <==
	I0327 22:47:52.726307       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0327 22:47:52.726397       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0327 22:47:52.789619       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0327 22:47:52.789708       1 server_others.go:185] Using iptables Proxier.
	I0327 22:47:52.790051       1 server.go:650] Version: v1.20.0
	I0327 22:47:52.792110       1 config.go:315] Starting service config controller
	I0327 22:47:52.792120       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0327 22:47:52.792137       1 config.go:224] Starting endpoint slice config controller
	I0327 22:47:52.792140       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0327 22:47:52.895886       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0327 22:47:52.895956       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [c476de2402c209bf0de9037eb9f3c324981378d17d4be3cc70e7610117f74d75] <==
	I0327 22:50:19.398873       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0327 22:50:19.399045       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0327 22:50:19.415484       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0327 22:50:19.415742       1 server_others.go:185] Using iptables Proxier.
	I0327 22:50:19.416288       1 server.go:650] Version: v1.20.0
	I0327 22:50:19.417051       1 config.go:315] Starting service config controller
	I0327 22:50:19.418582       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0327 22:50:19.417314       1 config.go:224] Starting endpoint slice config controller
	I0327 22:50:19.418831       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0327 22:50:19.518896       1 shared_informer.go:247] Caches are synced for service config 
	I0327 22:50:19.518968       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [756e890221cd204dc8073b60566f5e086ab44aa426abefcfc1ae8de0695ffe2a] <==
	I0327 22:50:09.259290       1 serving.go:331] Generated self-signed cert in-memory
	W0327 22:50:15.420715       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0327 22:50:15.420745       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 22:50:15.420762       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0327 22:50:15.420768       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0327 22:50:15.695449       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0327 22:50:15.705080       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0327 22:50:15.705109       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0327 22:50:15.705132       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0327 22:50:15.806535       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [ad81f139258693b946d350f892a0a317ad916974d01cf75227e4cbfb1120ad94] <==
	W0327 22:47:32.910486       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 22:47:32.910501       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0327 22:47:32.910506       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0327 22:47:32.978182       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0327 22:47:32.985724       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0327 22:47:32.985755       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0327 22:47:32.985785       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0327 22:47:33.002264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 22:47:33.002364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 22:47:33.002492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 22:47:33.002587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 22:47:33.003739       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 22:47:33.003886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 22:47:33.003993       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 22:47:33.005268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 22:47:33.005638       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 22:47:33.005675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 22:47:33.005918       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 22:47:33.006273       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 22:47:33.837895       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 22:47:33.838121       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 22:47:33.885483       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 22:47:33.889390       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 22:47:33.976265       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0327 22:47:36.585911       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 27 22:54:09 old-k8s-version-195171 kubelet[666]: E0327 22:54:09.957348     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	Mar 27 22:54:21 old-k8s-version-195171 kubelet[666]: E0327 22:54:21.957462     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:54:23 old-k8s-version-195171 kubelet[666]: I0327 22:54:23.957138     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:54:23 old-k8s-version-195171 kubelet[666]: E0327 22:54:23.957514     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	Mar 27 22:54:34 old-k8s-version-195171 kubelet[666]: I0327 22:54:34.956689     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:54:34 old-k8s-version-195171 kubelet[666]: E0327 22:54:34.957067     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	Mar 27 22:54:34 old-k8s-version-195171 kubelet[666]: E0327 22:54:34.958263     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:54:47 old-k8s-version-195171 kubelet[666]: E0327 22:54:47.960877     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:54:49 old-k8s-version-195171 kubelet[666]: I0327 22:54:49.956730     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:54:49 old-k8s-version-195171 kubelet[666]: E0327 22:54:49.957053     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	Mar 27 22:54:58 old-k8s-version-195171 kubelet[666]: E0327 22:54:58.957440     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:55:03 old-k8s-version-195171 kubelet[666]: I0327 22:55:03.962045     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:55:03 old-k8s-version-195171 kubelet[666]: E0327 22:55:03.963334     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	Mar 27 22:55:10 old-k8s-version-195171 kubelet[666]: E0327 22:55:10.957711     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:55:16 old-k8s-version-195171 kubelet[666]: I0327 22:55:16.956788     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:55:16 old-k8s-version-195171 kubelet[666]: E0327 22:55:16.957191     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	Mar 27 22:55:25 old-k8s-version-195171 kubelet[666]: E0327 22:55:25.961185     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:55:28 old-k8s-version-195171 kubelet[666]: I0327 22:55:28.958913     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:55:28 old-k8s-version-195171 kubelet[666]: E0327 22:55:28.964031     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	Mar 27 22:55:38 old-k8s-version-195171 kubelet[666]: E0327 22:55:38.957450     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:55:43 old-k8s-version-195171 kubelet[666]: I0327 22:55:43.957202     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:55:43 old-k8s-version-195171 kubelet[666]: E0327 22:55:43.957605     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	Mar 27 22:55:53 old-k8s-version-195171 kubelet[666]: E0327 22:55:53.961537     666 pod_workers.go:191] Error syncing pod c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e ("metrics-server-9975d5f86-6qllz_kube-system(c21a5ce4-dcf7-4f80-a87f-9dbb489daf2e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 27 22:55:54 old-k8s-version-195171 kubelet[666]: I0327 22:55:54.956696     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: f907a721684f6adfb1cde8bb163767fca3a9ec0e0973b00af8d0a2062e1867dd
	Mar 27 22:55:54 old-k8s-version-195171 kubelet[666]: E0327 22:55:54.957098     666 pod_workers.go:191] Error syncing pod dafb20ca-4906-4b83-9320-3a640f8faf3e ("dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmtpv_kubernetes-dashboard(dafb20ca-4906-4b83-9320-3a640f8faf3e)"
	
	
	==> kubernetes-dashboard [727ba30d5a28abb5c60d75f4b8540f5351812fd2401dc97a994eb0bb694e57fe] <==
	2024/03/27 22:50:37 Using namespace: kubernetes-dashboard
	2024/03/27 22:50:37 Using in-cluster config to connect to apiserver
	2024/03/27 22:50:37 Using secret token for csrf signing
	2024/03/27 22:50:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/27 22:50:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/27 22:50:37 Successful initial request to the apiserver, version: v1.20.0
	2024/03/27 22:50:37 Generating JWE encryption key
	2024/03/27 22:50:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/27 22:50:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/27 22:50:37 Initializing JWE encryption key from synchronized object
	2024/03/27 22:50:37 Creating in-cluster Sidecar client
	2024/03/27 22:50:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:50:37 Serving insecurely on HTTP port: 9090
	2024/03/27 22:51:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:51:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:52:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:52:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:53:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:53:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:54:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:54:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:55:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:55:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/27 22:50:37 Starting overwatch
	
	
	==> storage-provisioner [a6f0eaa04a1629cd4415df54b145c25c5c9c19b03373308fe9e59e05c27d176f] <==
	I0327 22:50:17.408631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0327 22:50:47.411414       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d801b298852bd17bcc4648b83a9cc84a9cca57433e0871456478a81642159943] <==
	I0327 22:51:00.212311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 22:51:00.276924       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 22:51:00.277230       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 22:51:17.821647       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 22:51:17.838680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-195171_159a60dd-e7af-40d0-b82e-204fd4c8a59f!
	I0327 22:51:17.839486       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"92a17c64-87ec-4e9c-a736-c41762120b0e", APIVersion:"v1", ResourceVersion:"847", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-195171_159a60dd-e7af-40d0-b82e-204fd4c8a59f became leader
	I0327 22:51:17.939643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-195171_159a60dd-e7af-40d0-b82e-204fd4c8a59f!
	

                                                
                                                
-- /stdout --
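Triage note: the captured logs above show two separable symptoms. metrics-server never starts because its image is pinned to the unreachable fake.domain registry (a plain ImagePullBackOff), while dashboard-metrics-scraper crash-loops, which is why the dashboard's metric client keeps failing its health check against the dashboard-metrics-scraper service; separately, the first storage-provisioner instance timed out reaching the apiserver VIP before its replacement succeeded. A minimal manual pass over the same profile might look like the sketch below; the kubectl and minikube invocations are standard, but the k8s-app=metrics-server label selector and the presence of curl inside the node image are assumptions.

	# Surface the image-pull events behind the ImagePullBackOff (label selector assumed from the stock manifest)
	kubectl --context old-k8s-version-195171 -n kube-system describe pod -l k8s-app=metrics-server
	# Read the scraper's last crash output (pod name taken from the kubelet log above)
	kubectl --context old-k8s-version-195171 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-pmtpv --previous
	# Check whether the service the dashboard keeps probing is backed by any endpoints
	kubectl --context old-k8s-version-195171 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper
	# Probe the in-cluster apiserver VIP the first storage-provisioner timed out on (assumes curl in the node image)
	out/minikube-linux-arm64 -p old-k8s-version-195171 ssh -- "curl -sk --max-time 5 https://10.96.0.1:443/version"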
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-195171 -n old-k8s-version-195171
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-195171 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-6qllz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-195171 describe pod metrics-server-9975d5f86-6qllz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-195171 describe pod metrics-server-9975d5f86-6qllz: exit status 1 (131.025425ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-6qllz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-195171 describe pod metrics-server-9975d5f86-6qllz: exit status 1
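Triage note: the NotFound above is a timing artifact rather than a fresh failure. The metrics-server pod reported as non-running by the field-selector listing had already been deleted by the time the describe ran, and the apiserver answered with NotFound rather than a connection error, so the control plane itself was still serving. Re-running the same helper query by hand distinguishes the two cases:

	# An empty result (or a successor pod name) means the pod was removed, not that the apiserver went away
	kubectl --context old-k8s-version-195171 get pods -A --field-selector=status.phase!=Running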
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.75s)

                                                
                                    

Test pass (297/335)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.77
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 7.56
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.09
18 TestDownloadOnly/v1.29.3/DeleteAll 0.23
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.30.0-beta.0/json-events 7.02
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.24
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.38
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.25
30 TestBinaryMirror 0.55
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 140.78
38 TestAddons/parallel/Registry 16.52
40 TestAddons/parallel/InspektorGadget 11.88
41 TestAddons/parallel/MetricsServer 6.88
44 TestAddons/parallel/CSI 64.19
45 TestAddons/parallel/Headlamp 12.19
46 TestAddons/parallel/CloudSpanner 6.61
47 TestAddons/parallel/LocalPath 53.6
48 TestAddons/parallel/NvidiaDevicePlugin 5.53
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.28
54 TestCertOptions 37.6
55 TestCertExpiration 227.8
57 TestForceSystemdFlag 39.43
58 TestForceSystemdEnv 42.39
59 TestDockerEnvContainerd 47.24
64 TestErrorSpam/setup 29.55
65 TestErrorSpam/start 0.74
66 TestErrorSpam/status 1
67 TestErrorSpam/pause 1.72
68 TestErrorSpam/unpause 1.81
69 TestErrorSpam/stop 1.44
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 82.13
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.11
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.11
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.99
81 TestFunctional/serial/CacheCmd/cache/add_local 1.59
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
86 TestFunctional/serial/CacheCmd/cache/delete 0.15
87 TestFunctional/serial/MinikubeKubectlCmd 0.16
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
89 TestFunctional/serial/ExtraConfig 43.41
90 TestFunctional/serial/ComponentHealth 0.12
91 TestFunctional/serial/LogsCmd 1.78
92 TestFunctional/serial/LogsFileCmd 1.8
93 TestFunctional/serial/InvalidService 4.36
95 TestFunctional/parallel/ConfigCmd 0.57
96 TestFunctional/parallel/DashboardCmd 7.89
97 TestFunctional/parallel/DryRun 0.57
98 TestFunctional/parallel/InternationalLanguage 0.35
99 TestFunctional/parallel/StatusCmd 1.18
103 TestFunctional/parallel/ServiceCmdConnect 9.72
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 29.91
107 TestFunctional/parallel/SSHCmd 0.63
108 TestFunctional/parallel/CpCmd 2.02
110 TestFunctional/parallel/FileSync 0.34
111 TestFunctional/parallel/CertSync 2.16
115 TestFunctional/parallel/NodeLabels 0.11
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
119 TestFunctional/parallel/License 0.68
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.36
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
132 TestFunctional/parallel/ServiceCmd/List 0.48
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
135 TestFunctional/parallel/ServiceCmd/Format 0.41
136 TestFunctional/parallel/ServiceCmd/URL 0.38
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
138 TestFunctional/parallel/ProfileCmd/profile_list 0.38
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
140 TestFunctional/parallel/MountCmd/any-port 7.53
141 TestFunctional/parallel/MountCmd/specific-port 2.39
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.46
143 TestFunctional/parallel/Version/short 0.07
144 TestFunctional/parallel/Version/components 1.38
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.82
150 TestFunctional/parallel/ImageCommands/Setup 2.58
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/StartCluster 130.71
168 TestMultiControlPlane/serial/DeployApp 19.65
169 TestMultiControlPlane/serial/PingHostFromPods 1.79
170 TestMultiControlPlane/serial/AddWorkerNode 23.95
171 TestMultiControlPlane/serial/NodeLabels 0.12
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
173 TestMultiControlPlane/serial/CopyFile 19.87
174 TestMultiControlPlane/serial/StopSecondaryNode 12.9
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.61
176 TestMultiControlPlane/serial/RestartSecondaryNode 18.32
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 143.13
179 TestMultiControlPlane/serial/DeleteSecondaryNode 11.76
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
181 TestMultiControlPlane/serial/StopCluster 36.08
182 TestMultiControlPlane/serial/RestartCluster 82.72
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
184 TestMultiControlPlane/serial/AddSecondaryNode 46.12
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
189 TestJSONOutput/start/Command 87.73
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.73
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.71
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.91
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.24
214 TestKicCustomNetwork/create_custom_network 40.76
215 TestKicCustomNetwork/use_default_bridge_network 31.68
216 TestKicExistingNetwork 32.52
217 TestKicCustomSubnet 35.33
218 TestKicStaticIP 36.37
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 69.25
223 TestMountStart/serial/StartWithMountFirst 6.31
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 6.59
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.6
228 TestMountStart/serial/VerifyMountPostDelete 0.29
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 7.22
231 TestMountStart/serial/VerifyMountPostStop 0.27
234 TestMultiNode/serial/FreshStart2Nodes 94.11
235 TestMultiNode/serial/DeployApp2Nodes 4.19
236 TestMultiNode/serial/PingHostFrom2Pods 1.18
237 TestMultiNode/serial/AddNode 17.05
238 TestMultiNode/serial/MultiNodeLabels 0.11
239 TestMultiNode/serial/ProfileList 0.45
240 TestMultiNode/serial/CopyFile 10.68
241 TestMultiNode/serial/StopNode 2.25
242 TestMultiNode/serial/StartAfterStop 9.35
243 TestMultiNode/serial/RestartKeepsNodes 83.11
244 TestMultiNode/serial/DeleteNode 5.42
245 TestMultiNode/serial/StopMultiNode 24.05
246 TestMultiNode/serial/RestartMultiNode 46.57
247 TestMultiNode/serial/ValidateNameConflict 35.88
252 TestPreload 108.07
254 TestScheduledStopUnix 108.15
257 TestInsufficientStorage 10.71
258 TestRunningBinaryUpgrade 89.82
260 TestKubernetesUpgrade 369.34
261 TestMissingContainerUpgrade 175.39
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 37.11
265 TestNoKubernetes/serial/StartWithStopK8s 16.18
266 TestNoKubernetes/serial/Start 8.04
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
268 TestNoKubernetes/serial/ProfileList 1.04
269 TestNoKubernetes/serial/Stop 1.22
270 TestNoKubernetes/serial/StartNoArgs 7.11
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
272 TestStoppedBinaryUpgrade/Setup 1.11
273 TestStoppedBinaryUpgrade/Upgrade 104.07
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
283 TestPause/serial/Start 84.35
284 TestPause/serial/SecondStartNoReconfiguration 6.83
285 TestPause/serial/Pause 0.93
286 TestPause/serial/VerifyStatus 0.38
287 TestPause/serial/Unpause 0.81
288 TestPause/serial/PauseAgain 1.31
289 TestPause/serial/DeletePaused 3.11
290 TestPause/serial/VerifyDeletedResources 3.79
298 TestNetworkPlugins/group/false 5.48
303 TestStartStop/group/old-k8s-version/serial/FirstStart 151.75
304 TestStartStop/group/old-k8s-version/serial/DeployApp 8.95
306 TestStartStop/group/no-preload/serial/FirstStart 73.63
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.05
308 TestStartStop/group/old-k8s-version/serial/Stop 13.75
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.49
311 TestStartStop/group/no-preload/serial/DeployApp 9.39
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
313 TestStartStop/group/no-preload/serial/Stop 12.21
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/no-preload/serial/SecondStart 267.16
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
319 TestStartStop/group/no-preload/serial/Pause 3.84
321 TestStartStop/group/embed-certs/serial/FirstStart 86.9
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.1
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.15
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
325 TestStartStop/group/old-k8s-version/serial/Pause 3.79
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.66
328 TestStartStop/group/embed-certs/serial/DeployApp 9.37
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
330 TestStartStop/group/embed-certs/serial/Stop 12.09
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/embed-certs/serial/SecondStart 267.76
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.43
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.62
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 291.31
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
341 TestStartStop/group/embed-certs/serial/Pause 3.18
343 TestStartStop/group/newest-cni/serial/FirstStart 45.71
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
348 TestStartStop/group/newest-cni/serial/Stop 1.3
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.57
351 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
352 TestStartStop/group/newest-cni/serial/SecondStart 18.83
353 TestNetworkPlugins/group/auto/Start 94.62
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
357 TestStartStop/group/newest-cni/serial/Pause 4.06
358 TestNetworkPlugins/group/kindnet/Start 91.5
359 TestNetworkPlugins/group/auto/KubeletFlags 0.32
360 TestNetworkPlugins/group/auto/NetCatPod 10.28
361 TestNetworkPlugins/group/auto/DNS 0.2
362 TestNetworkPlugins/group/auto/Localhost 0.15
363 TestNetworkPlugins/group/auto/HairPin 0.17
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
366 TestNetworkPlugins/group/kindnet/NetCatPod 9.43
367 TestNetworkPlugins/group/kindnet/DNS 0.28
368 TestNetworkPlugins/group/kindnet/Localhost 0.2
369 TestNetworkPlugins/group/kindnet/HairPin 0.2
370 TestNetworkPlugins/group/calico/Start 84.53
371 TestNetworkPlugins/group/custom-flannel/Start 63.27
372 TestNetworkPlugins/group/calico/ControllerPod 6.05
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
375 TestNetworkPlugins/group/calico/KubeletFlags 0.46
376 TestNetworkPlugins/group/calico/NetCatPod 9.4
377 TestNetworkPlugins/group/custom-flannel/DNS 0.22
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
380 TestNetworkPlugins/group/calico/DNS 0.22
381 TestNetworkPlugins/group/calico/Localhost 0.18
382 TestNetworkPlugins/group/calico/HairPin 0.16
383 TestNetworkPlugins/group/enable-default-cni/Start 100.9
384 TestNetworkPlugins/group/flannel/Start 72.44
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
387 TestNetworkPlugins/group/flannel/NetCatPod 8.27
388 TestNetworkPlugins/group/flannel/DNS 0.18
389 TestNetworkPlugins/group/flannel/Localhost 0.16
390 TestNetworkPlugins/group/flannel/HairPin 0.17
391 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
392 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.35
393 TestNetworkPlugins/group/bridge/Start 89.05
394 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
395 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
396 TestNetworkPlugins/group/enable-default-cni/HairPin 0.3
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
398 TestNetworkPlugins/group/bridge/NetCatPod 10.28
399 TestNetworkPlugins/group/bridge/DNS 0.18
400 TestNetworkPlugins/group/bridge/Localhost 0.15
401 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (7.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-422079 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-422079 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.767704596s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.77s)
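In -o=json mode every line minikube prints is a CloudEvents-formatted JSON object, which is what the json-events tests assert. A hand-run sanity check of the same stream could look like the sketch below; the io.k8s.sigs.minikube.step event type and .data.message field follow minikube's documented JSON-output schema (treat them as assumptions if the schema has drifted), and the download-only-demo profile name is made up for illustration:

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo --force \
	  --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'   # print just the step messages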

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
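preload-exists asserts that the tarball fetched in the previous step is present in minikube's cache. The equivalent manual spot-check, using the MINIKUBE_HOME and tarball path that appear verbatim in the LogsDuration output below:

	ls -lh /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4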

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-422079
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-422079: exit status 85 (86.145315ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-422079 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |          |
	|         | -p download-only-422079        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 22:02:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 22:02:02.200459 1416132 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:02:02.200689 1416132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:02:02.200715 1416132 out.go:304] Setting ErrFile to fd 2...
	I0327 22:02:02.200734 1416132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:02:02.201018 1416132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	W0327 22:02:02.201187 1416132 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17735-1410709/.minikube/config/config.json: open /home/jenkins/minikube-integration/17735-1410709/.minikube/config/config.json: no such file or directory
	I0327 22:02:02.201644 1416132 out.go:298] Setting JSON to true
	I0327 22:02:02.202659 1416132 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20660,"bootTime":1711556262,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:02:02.202759 1416132 start.go:139] virtualization:  
	I0327 22:02:02.206210 1416132 out.go:97] [download-only-422079] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 22:02:02.208520 1416132 out.go:169] MINIKUBE_LOCATION=17735
	W0327 22:02:02.206464 1416132 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 22:02:02.206506 1416132 notify.go:220] Checking for updates...
	I0327 22:02:02.210821 1416132 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:02:02.212897 1416132 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:02:02.215002 1416132 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:02:02.217690 1416132 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 22:02:02.221311 1416132 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 22:02:02.221599 1416132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:02:02.239997 1416132 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:02:02.240108 1416132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:02:02.302241 1416132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 22:02:02.292755218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:02:02.302355 1416132 docker.go:295] overlay module found
	I0327 22:02:02.305238 1416132 out.go:97] Using the docker driver based on user configuration
	I0327 22:02:02.305302 1416132 start.go:297] selected driver: docker
	I0327 22:02:02.305311 1416132 start.go:901] validating driver "docker" against <nil>
	I0327 22:02:02.305436 1416132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:02:02.358664 1416132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 22:02:02.348771316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:02:02.358846 1416132 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 22:02:02.359164 1416132 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 22:02:02.359314 1416132 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 22:02:02.361674 1416132 out.go:169] Using Docker driver with root privileges
	I0327 22:02:02.363755 1416132 cni.go:84] Creating CNI manager for ""
	I0327 22:02:02.363778 1416132 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:02:02.363788 1416132 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 22:02:02.363870 1416132 start.go:340] cluster config:
	{Name:download-only-422079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-422079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:02:02.365625 1416132 out.go:97] Starting "download-only-422079" primary control-plane node in "download-only-422079" cluster
	I0327 22:02:02.365654 1416132 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 22:02:02.367546 1416132 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 22:02:02.367576 1416132 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 22:02:02.367751 1416132 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 22:02:02.380494 1416132 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 22:02:02.381243 1416132 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 22:02:02.381351 1416132 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 22:02:02.444252 1416132 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0327 22:02:02.444278 1416132 cache.go:56] Caching tarball of preloaded images
	I0327 22:02:02.444434 1416132 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 22:02:02.446793 1416132 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 22:02:02.446821 1416132 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0327 22:02:02.560151 1416132 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-422079 host does not exist
	  To start a cluster, run: "minikube start -p download-only-422079"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
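The start log above records both the preload URL and its md5 checksum, so the artifact can be verified out of band with stock tools:

	curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
	md5sum preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4   # expect 7e3d48ccb9f143791669d02e14ce1643, per the download.go line above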

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-422079
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (7.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-223339 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-223339 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.555242986s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (7.56s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-223339
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-223339: exit status 85 (90.53178ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-422079 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | -p download-only-422079        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| delete  | -p download-only-422079        | download-only-422079 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| start   | -o=json --download-only        | download-only-223339 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | -p download-only-223339        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 22:02:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 22:02:10.400765 1416300 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:02:10.400983 1416300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:02:10.401010 1416300 out.go:304] Setting ErrFile to fd 2...
	I0327 22:02:10.401031 1416300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:02:10.401301 1416300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:02:10.401750 1416300 out.go:298] Setting JSON to true
	I0327 22:02:10.402805 1416300 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20668,"bootTime":1711556262,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:02:10.402901 1416300 start.go:139] virtualization:  
	I0327 22:02:10.405699 1416300 out.go:97] [download-only-223339] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 22:02:10.408083 1416300 out.go:169] MINIKUBE_LOCATION=17735
	I0327 22:02:10.405944 1416300 notify.go:220] Checking for updates...
	I0327 22:02:10.410347 1416300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:02:10.412660 1416300 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:02:10.415094 1416300 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:02:10.417566 1416300 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 22:02:10.421776 1416300 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 22:02:10.422086 1416300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:02:10.440418 1416300 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:02:10.440525 1416300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:02:10.501599 1416300 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-03-27 22:02:10.492207692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:02:10.501714 1416300 docker.go:295] overlay module found
	I0327 22:02:10.504237 1416300 out.go:97] Using the docker driver based on user configuration
	I0327 22:02:10.504268 1416300 start.go:297] selected driver: docker
	I0327 22:02:10.504275 1416300 start.go:901] validating driver "docker" against <nil>
	I0327 22:02:10.504387 1416300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:02:10.556964 1416300 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-03-27 22:02:10.548271395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:02:10.557146 1416300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 22:02:10.557438 1416300 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 22:02:10.557591 1416300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 22:02:10.560342 1416300 out.go:169] Using Docker driver with root privileges
	I0327 22:02:10.562624 1416300 cni.go:84] Creating CNI manager for ""
	I0327 22:02:10.562645 1416300 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:02:10.562654 1416300 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 22:02:10.562748 1416300 start.go:340] cluster config:
	{Name:download-only-223339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-223339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:02:10.565439 1416300 out.go:97] Starting "download-only-223339" primary control-plane node in "download-only-223339" cluster
	I0327 22:02:10.565459 1416300 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 22:02:10.567568 1416300 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 22:02:10.567608 1416300 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 22:02:10.567704 1416300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 22:02:10.580592 1416300 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 22:02:10.580735 1416300 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 22:02:10.580755 1416300 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 22:02:10.580761 1416300 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 22:02:10.580780 1416300 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 22:02:10.636431 1416300 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0327 22:02:10.636462 1416300 cache.go:56] Caching tarball of preloaded images
	I0327 22:02:10.636642 1416300 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 22:02:10.639324 1416300 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 22:02:10.639350 1416300 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 ...
	I0327 22:02:10.762344 1416300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:663a9a795decbfebeb48b89f3f24d179 -> /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-223339 host does not exist
	  To start a cluster, run: "minikube start -p download-only-223339"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-223339
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.0-beta.0/json-events (7.02s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-296933 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-296933 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.021409091s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (7.02s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.24s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-296933
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-296933: exit status 85 (237.755793ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-422079 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | -p download-only-422079             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| delete  | -p download-only-422079             | download-only-422079 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| start   | -o=json --download-only             | download-only-223339 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | -p download-only-223339             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| delete  | -p download-only-223339             | download-only-223339 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC | 27 Mar 24 22:02 UTC |
	| start   | -o=json --download-only             | download-only-296933 | jenkins | v1.33.0-beta.0 | 27 Mar 24 22:02 UTC |                     |
	|         | -p download-only-296933             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 22:02:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 22:02:18.419637 1416464 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:02:18.419845 1416464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:02:18.419856 1416464 out.go:304] Setting ErrFile to fd 2...
	I0327 22:02:18.419861 1416464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:02:18.420136 1416464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:02:18.420562 1416464 out.go:298] Setting JSON to true
	I0327 22:02:18.421496 1416464 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20676,"bootTime":1711556262,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:02:18.421576 1416464 start.go:139] virtualization:  
	I0327 22:02:18.424131 1416464 out.go:97] [download-only-296933] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 22:02:18.426117 1416464 out.go:169] MINIKUBE_LOCATION=17735
	I0327 22:02:18.424351 1416464 notify.go:220] Checking for updates...
	I0327 22:02:18.430008 1416464 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:02:18.432457 1416464 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:02:18.434622 1416464 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:02:18.436845 1416464 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 22:02:18.440922 1416464 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 22:02:18.441223 1416464 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:02:18.461764 1416464 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:02:18.461877 1416464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:02:18.532603 1416464 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 22:02:18.523476655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:02:18.532718 1416464 docker.go:295] overlay module found
	I0327 22:02:18.535248 1416464 out.go:97] Using the docker driver based on user configuration
	I0327 22:02:18.535290 1416464 start.go:297] selected driver: docker
	I0327 22:02:18.535298 1416464 start.go:901] validating driver "docker" against <nil>
	I0327 22:02:18.535419 1416464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:02:18.588105 1416464 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 22:02:18.578131404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:02:18.588283 1416464 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 22:02:18.588615 1416464 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 22:02:18.588774 1416464 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 22:02:18.592904 1416464 out.go:169] Using Docker driver with root privileges
	I0327 22:02:18.594850 1416464 cni.go:84] Creating CNI manager for ""
	I0327 22:02:18.594872 1416464 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 22:02:18.594882 1416464 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 22:02:18.594974 1416464 start.go:340] cluster config:
	{Name:download-only-296933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-296933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:02:18.596943 1416464 out.go:97] Starting "download-only-296933" primary control-plane node in "download-only-296933" cluster
	I0327 22:02:18.596964 1416464 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 22:02:18.598829 1416464 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 22:02:18.598856 1416464 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 22:02:18.599016 1416464 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 22:02:18.613249 1416464 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 22:02:18.613379 1416464 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 22:02:18.613403 1416464 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 22:02:18.613408 1416464 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 22:02:18.613420 1416464 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 22:02:18.664998 1416464 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I0327 22:02:18.665025 1416464 cache.go:56] Caching tarball of preloaded images
	I0327 22:02:18.665216 1416464 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 22:02:18.667945 1416464 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 22:02:18.667979 1416464 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	I0327 22:02:18.785107 1416464 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:f676343275e1172ac594af64d6d0592a -> /home/jenkins/minikube-integration/17735-1410709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-296933 host does not exist
	  To start a cluster, run: "minikube start -p download-only-296933"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.24s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.38s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.38s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-296933
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.25s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-257059 --alsologtostderr --binary-mirror http://127.0.0.1:41911 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-257059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-257059
--- PASS: TestBinaryMirror (0.55s)
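
TestBinaryMirror points minikube at an alternative HTTP endpoint for its kubectl/kubelet/kubeadm downloads via --binary-mirror. A minimal sketch of standing up such a mirror by hand, assuming a ./mirror directory already laid out with the binaries in the paths minikube requests (directory name, backgrounding, and use of python3's built-in server are illustrative assumptions; the port matches the run above):

python3 -m http.server 41911 --directory ./mirror &   # assumed local layout under ./mirror
out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:41911 --driver=docker --container-runtime=containerd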

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-135346
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-135346: exit status 85 (90.554148ms)

-- stdout --
	* Profile "addons-135346" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-135346"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-135346
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-135346: exit status 85 (83.958414ms)

-- stdout --
	* Profile "addons-135346" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-135346"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
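
Both PreSetup checks pass because addon commands against a profile that does not exist are expected to fail fast with exit status 85 instead of mutating state, as the captured output shows. A quick manual spot-check (the profile name is deliberately bogus):

out/minikube-linux-arm64 addons enable dashboard -p no-such-profile
echo $?   # expected: 85 (profile not found), matching both non-zero exits above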

TestAddons/Setup (140.78s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-135346 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-135346 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m20.783562286s)
--- PASS: TestAddons/Setup (140.78s)

TestAddons/parallel/Registry (16.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 45.640188ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xxnz7" [97d2d3a2-6694-404e-af05-f67f3d8df2fd] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005075969s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8lb9d" [ce79ef2c-7855-4f60-b346-89e09148c93c] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.007759558s
addons_test.go:340: (dbg) Run:  kubectl --context addons-135346 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-135346 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-135346 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.367055127s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 ip
2024/03/27 22:05:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.52s)
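
The in-cluster probe above uses a busybox pod and wget against the registry Service; the debug GET shows the same registry answering on the node IP at port 5000. A host-side sketch of that check (IP and port taken from this run; /v2/ is the registry HTTP API's standard ping endpoint):

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.49.2:5000/v2/   # expect 200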

TestAddons/parallel/InspektorGadget (11.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-24x97" [7cdd08f2-fb40-4ba3-bd74-4afead8e962b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004711458s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-135346
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-135346: (5.876616541s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

TestAddons/parallel/MetricsServer (6.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.576668ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-7c8jn" [f9a04f3b-1673-4048-884b-d69a6e500b20] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005505878s
addons_test.go:415: (dbg) Run:  kubectl --context addons-135346 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.88s)
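
With the metrics-server pod healthy, usage data is served through the metrics API; the test queries pod metrics, and the node-level query works the same way:

kubectl --context addons-135346 top pods -n kube-system
kubectl --context addons-135346 top nodes   # also backed by metrics-server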

TestAddons/parallel/CSI (64.19s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 46.403351ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-135346 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-135346 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e6e60b5b-4c57-49db-a34c-3b7a34a296b2] Pending
helpers_test.go:344: "task-pv-pod" [e6e60b5b-4c57-49db-a34c-3b7a34a296b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e6e60b5b-4c57-49db-a34c-3b7a34a296b2] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003559248s
addons_test.go:584: (dbg) Run:  kubectl --context addons-135346 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-135346 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-135346 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-135346 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-135346 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-135346 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-135346 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [db6b5aae-4bdb-4817-8953-b02c5d1d28a3] Pending
helpers_test.go:344: "task-pv-pod-restore" [db6b5aae-4bdb-4817-8953-b02c5d1d28a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [db6b5aae-4bdb-4817-8953-b02c5d1d28a3] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.003770502s
addons_test.go:626: (dbg) Run:  kubectl --context addons-135346 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-135346 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-135346 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-135346 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.923771039s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.19s)
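
The PVC manifests used here live in testdata and are not echoed into the log. A minimal sketch of an equivalent claim, assuming the addon's csi-hostpath-sc StorageClass and an arbitrary 1Gi request (both assumptions, not copied from the testdata file):

kubectl --context addons-135346 apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi                     # assumed size
  storageClassName: csi-hostpath-sc    # assumed addon default class
EOF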

TestAddons/parallel/Headlamp (12.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-135346 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-135346 --alsologtostderr -v=1: (1.190380349s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-74cxp" [89c9b0ad-84ed-40f5-a9bd-b210e64850d5] Pending
helpers_test.go:344: "headlamp-5485c556b-74cxp" [89c9b0ad-84ed-40f5-a9bd-b210e64850d5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-74cxp" [89c9b0ad-84ed-40f5-a9bd-b210e64850d5] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00353171s
--- PASS: TestAddons/parallel/Headlamp (12.19s)

TestAddons/parallel/CloudSpanner (6.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-5rrq9" [8aedc78a-3c45-48c5-9416-a37242667b3f] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003781075s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-135346
--- PASS: TestAddons/parallel/CloudSpanner (6.61s)

TestAddons/parallel/LocalPath (53.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-135346 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-135346 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135346 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dbcf233e-6195-421d-ae9e-0ad77d7b7076] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dbcf233e-6195-421d-ae9e-0ad77d7b7076] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dbcf233e-6195-421d-ae9e-0ad77d7b7076] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00611312s
addons_test.go:891: (dbg) Run:  kubectl --context addons-135346 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 ssh "cat /opt/local-path-provisioner/pvc-6ca7a1bb-695c-4033-b70f-ede35e872947_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-135346 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-135346 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-135346 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-135346 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.277044221s)
--- PASS: TestAddons/parallel/LocalPath (53.60s)
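
The ssh step above reads the file back from the host to prove the provisioner materialized the volume; local-path volumes land under /opt/local-path-provisioner, one directory per PV, named <pv-name>_<namespace>_<claim-name> as in the path shown. While a claim is bound they can be listed directly (PV names vary per run):

out/minikube-linux-arm64 -p addons-135346 ssh "ls /opt/local-path-provisioner/"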

TestAddons/parallel/NvidiaDevicePlugin (5.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6r9b9" [7678e1f1-ecdb-4201-aaa3-0b78cfb78319] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004292161s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-135346
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-jjjdt" [8cd5d894-486d-4e32-8672-a8b9e55fc3c8] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003724319s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-135346 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-135346 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-135346
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-135346: (11.982586664s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-135346
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-135346
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-135346
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestCertOptions (37.6s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-876699 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-876699 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.950838052s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-876699 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-876699 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-876699 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-876699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-876699
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-876699: (1.988173386s)
--- PASS: TestCertOptions (37.60s)
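
The openssl step dumps the entire apiserver certificate; to confirm just the SANs the --apiserver-ips/--apiserver-names flags requested, filtering is enough (run while the profile is still up):

out/minikube-linux-arm64 -p cert-options-876699 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'   # should list 192.168.15.15 and www.google.com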

TestCertExpiration (227.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-348519 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-348519 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.796512261s)
E0327 22:46:26.358540 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-348519 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-348519 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.703347677s)
helpers_test.go:175: Cleaning up "cert-expiration-348519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-348519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-348519: (2.295928575s)
--- PASS: TestCertExpiration (227.80s)
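
Most of the ~228s here is the wait for the 3-minute certificates from the first start to lapse before the second start re-issues them with --cert-expiration=8760h. The effective expiry can be read off the apiserver certificate directly:

out/minikube-linux-arm64 -p cert-expiration-348519 ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"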

TestForceSystemdFlag (39.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-662526 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-662526 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.874929961s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-662526 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-662526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-662526
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-662526: (2.172645873s)
--- PASS: TestForceSystemdFlag (39.43s)
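
The ssh step reads back the generated containerd config; with --force-systemd the runc options should select the systemd cgroup driver, which a grep makes explicit:

out/minikube-linux-arm64 -p force-systemd-flag-662526 ssh \
  "grep SystemdCgroup /etc/containerd/config.toml"   # expect: SystemdCgroup = true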

TestForceSystemdEnv (42.39s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-151767 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-151767 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.662301044s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-151767 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-151767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-151767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-151767: (2.241211392s)
--- PASS: TestForceSystemdEnv (42.39s)
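
This variant drives the same behavior through the environment instead of a flag; minikube honors the MINIKUBE_FORCE_SYSTEMD variable (the profile name below is illustrative):

MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-demo \
  --memory=2048 --driver=docker --container-runtime=containerd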

TestDockerEnvContainerd (47.24s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-277247 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-277247 --driver=docker  --container-runtime=containerd: (31.255026458s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-277247"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-277247": (1.164010738s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-eWT5m50gjEMb/agent.1434281" SSH_AGENT_PID="1434282" DOCKER_HOST=ssh://docker@127.0.0.1:34305 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-eWT5m50gjEMb/agent.1434281" SSH_AGENT_PID="1434282" DOCKER_HOST=ssh://docker@127.0.0.1:34305 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-eWT5m50gjEMb/agent.1434281" SSH_AGENT_PID="1434282" DOCKER_HOST=ssh://docker@127.0.0.1:34305 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.464445124s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-eWT5m50gjEMb/agent.1434281" SSH_AGENT_PID="1434282" DOCKER_HOST=ssh://docker@127.0.0.1:34305 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-277247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-277247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-277247: (1.946601018s)
--- PASS: TestDockerEnvContainerd (47.24s)
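
The test wires a host docker CLI to the cluster over SSH using the flags shown above; interactively, the same wiring is a single eval, after which docker commands talk to the minikube node:

eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-277247)"
docker image ls   # now served over ssh:// from the cluster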

TestErrorSpam/setup (29.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-658085 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-658085 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-658085 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-658085 --driver=docker  --container-runtime=containerd: (29.551564013s)
--- PASS: TestErrorSpam/setup (29.55s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 stop: (1.236455744s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-658085 --log_dir /tmp/nospam-658085 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17735-1410709/.minikube/files/etc/test/nested/copy/1416127/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (82.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-057506 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0327 22:09:48.785856 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:48.791591 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:48.801934 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:48.822224 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:48.862465 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:48.942865 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:49.103348 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:49.423999 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:50.064646 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:51.344875 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:53.906819 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:09:59.027633 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:10:09.268582 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-057506 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m22.129683504s)
--- PASS: TestFunctional/serial/StartWithProxy (82.13s)
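Note: the repeated E0327 cert_rotation errors above appear to come from a client-go watcher still holding a reference to the addons-135346 profile, presumably deleted earlier in the run; once that profile directory is gone its client.crt can no longer be opened, and the functional-057506 start itself is unaffected. A quick check (sketch):
$ minikube profile list                                 # addons-135346 should no longer appear
$ ls ~/.minikube/profiles/addons-135346/client.crt      # No such file or directory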

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.11s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-057506 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-057506 --alsologtostderr -v=8: (6.103110795s)
functional_test.go:659: soft start took 6.112954769s for "functional-057506" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.11s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-057506 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 cache add registry.k8s.io/pause:3.1: (1.518170866s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 cache add registry.k8s.io/pause:3.3: (1.293468307s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 cache add registry.k8s.io/pause:latest: (1.182016536s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

TestFunctional/serial/CacheCmd/cache/add_local (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-057506 /tmp/TestFunctionalserialCacheCmdcacheadd_local3096647769/001
E0327 22:10:29.748767 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cache add minikube-local-cache-test:functional-057506
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cache delete minikube-local-cache-test:functional-057506
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-057506
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.59s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.712482ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 cache reload: (1.135128781s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
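Note: the reload sequence above, reduced to plain commands from the log:
$ minikube -p functional-057506 ssh sudo crictl rmi registry.k8s.io/pause:latest
$ minikube -p functional-057506 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image removed
$ minikube -p functional-057506 cache reload                                            # restores cached images into the node
$ minikube -p functional-057506 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again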

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 kubectl -- --context functional-057506 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-057506 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (43.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-057506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0327 22:11:10.709267 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-057506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.4057513s)
functional_test.go:757: restart took 43.405854633s for "functional-057506" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.41s)

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-057506 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 logs: (1.780046356s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

TestFunctional/serial/LogsFileCmd (1.8s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 logs --file /tmp/TestFunctionalserialLogsFileCmd2456067353/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 logs --file /tmp/TestFunctionalserialLogsFileCmd2456067353/001/logs.txt: (1.802354631s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.80s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-057506 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-057506
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-057506: exit status 115 (441.92695ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30194 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-057506 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)
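Note: exit status 115 with SVC_UNREACHABLE is the expected result here, since invalid-svc has no running pod behind it; the same check by hand (sketch):
$ kubectl --context functional-057506 apply -f testdata/invalidsvc.yaml
$ minikube service invalid-svc -p functional-057506     # prints the NodePort URL, then exits 115
$ kubectl --context functional-057506 delete -f testdata/invalidsvc.yaml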

TestFunctional/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 config get cpus: exit status 14 (100.734206ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 config get cpus: exit status 14 (82.800311ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)
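Note: `config get` on an unset key exits 14 with "specified key could not be found in config", which is what the test asserts twice; the round trip:
$ minikube -p functional-057506 config set cpus 2
$ minikube -p functional-057506 config get cpus         # 2
$ minikube -p functional-057506 config unset cpus
$ minikube -p functional-057506 config get cpus         # exit 14: key not found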

TestFunctional/parallel/DashboardCmd (7.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-057506 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-057506 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1448397: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.89s)

TestFunctional/parallel/DryRun (0.57s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-057506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-057506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (292.434295ms)

-- stdout --
	* [functional-057506] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0327 22:11:57.910333 1448023 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:11:57.910528 1448023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:11:57.910552 1448023 out.go:304] Setting ErrFile to fd 2...
	I0327 22:11:57.910569 1448023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:11:57.910834 1448023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:11:57.911241 1448023 out.go:298] Setting JSON to false
	I0327 22:11:57.912285 1448023 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":21256,"bootTime":1711556262,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:11:57.912364 1448023 start.go:139] virtualization:  
	I0327 22:11:57.915040 1448023 out.go:177] * [functional-057506] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 22:11:57.918113 1448023 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 22:11:57.920035 1448023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:11:57.918246 1448023 notify.go:220] Checking for updates...
	I0327 22:11:57.923605 1448023 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:11:57.925543 1448023 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:11:57.927264 1448023 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 22:11:57.929059 1448023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 22:11:57.931302 1448023 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:11:57.931859 1448023 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:11:57.963821 1448023 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:11:57.963942 1448023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:11:58.075073 1448023 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-27 22:11:58.064502096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:11:58.075189 1448023 docker.go:295] overlay module found
	I0327 22:11:58.078009 1448023 out.go:177] * Using the docker driver based on existing profile
	I0327 22:11:58.080103 1448023 start.go:297] selected driver: docker
	I0327 22:11:58.080126 1448023 start.go:901] validating driver "docker" against &{Name:functional-057506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-057506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:11:58.080241 1448023 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 22:11:58.082868 1448023 out.go:177] 
	W0327 22:11:58.085413 1448023 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 22:11:58.087504 1448023 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-057506 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.57s)
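Note: --dry-run still runs start's validation, so the undersized request fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) before anything is created:
$ minikube start -p functional-057506 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
# X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested 250MiB is less than the usable minimum of 1800MB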

TestFunctional/parallel/InternationalLanguage (0.35s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-057506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-057506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (348.734414ms)

-- stdout --
	* [functional-057506] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0327 22:11:57.591364 1447863 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:11:57.592224 1447863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:11:57.592237 1447863 out.go:304] Setting ErrFile to fd 2...
	I0327 22:11:57.592243 1447863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:11:57.594651 1447863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:11:57.597742 1447863 out.go:298] Setting JSON to false
	I0327 22:11:57.606881 1447863 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":21256,"bootTime":1711556262,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:11:57.608067 1447863 start.go:139] virtualization:  
	I0327 22:11:57.612099 1447863 out.go:177] * [functional-057506] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0327 22:11:57.614524 1447863 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 22:11:57.614594 1447863 notify.go:220] Checking for updates...
	I0327 22:11:57.622171 1447863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:11:57.625674 1447863 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:11:57.628072 1447863 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:11:57.630334 1447863 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 22:11:57.632558 1447863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 22:11:57.635689 1447863 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:11:57.636198 1447863 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:11:57.659866 1447863 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:11:57.660000 1447863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:11:57.781312 1447863 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-27 22:11:57.771574243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:11:57.781426 1447863 docker.go:295] overlay module found
	I0327 22:11:57.784717 1447863 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0327 22:11:57.787945 1447863 start.go:297] selected driver: docker
	I0327 22:11:57.787963 1447863 start.go:901] validating driver "docker" against &{Name:functional-057506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-057506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 22:11:57.788083 1447863 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 22:11:57.791142 1447863 out.go:177] 
	W0327 22:11:57.793694 1447863 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0327 22:11:57.795839 1447863 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.35s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
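Note: `status -f` takes a Go template over the status struct (Host, Kubelet, APIServer, Kubeconfig), and `-o json` emits the same data as JSON:
$ minikube -p functional-057506 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
$ minikube -p functional-057506 status -o json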

TestFunctional/parallel/ServiceCmdConnect (9.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-057506 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-057506 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-wnpw4" [45f0326b-bd3c-46f3-af3b-f41a41392beb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-wnpw4" [45f0326b-bd3c-46f3-af3b-f41a41392beb] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004578321s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31080
functional_test.go:1671: http://192.168.49.2:31080: success! body:

Hostname: hello-node-connect-7799dfb7c6-wnpw4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31080
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.72s)
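Note: the echoserver reply above becomes reachable once the deployment is exposed as a NodePort; condensed from the log (curl added for illustration):
$ kubectl --context functional-057506 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
$ kubectl --context functional-057506 expose deployment hello-node-connect --type=NodePort --port=8080
$ minikube -p functional-057506 service hello-node-connect --url
http://192.168.49.2:31080
$ curl -s http://192.168.49.2:31080                     # returns the Hostname/Request dump above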

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (29.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [05136c16-9d37-4ee2-919d-3aab3923da91] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004519983s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-057506 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-057506 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-057506 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-057506 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-057506 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-057506 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c212b255-3c28-4609-bfc5-846492f103ad] Pending
helpers_test.go:344: "sp-pod" [c212b255-3c28-4609-bfc5-846492f103ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c212b255-3c28-4609-bfc5-846492f103ad] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003988784s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-057506 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-057506 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-057506 delete -f testdata/storage-provisioner/pod.yaml: (1.323400298s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-057506 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [af8a6fb0-8c41-444c-a2bb-6a8dd9107cdc] Pending
helpers_test.go:344: "sp-pod" [af8a6fb0-8c41-444c-a2bb-6a8dd9107cdc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005448585s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-057506 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.91s)
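Note: the test verifies that data written to the PVC survives deletion and re-creation of the pod; condensed from the log:
$ kubectl --context functional-057506 apply -f testdata/storage-provisioner/pvc.yaml
$ kubectl --context functional-057506 apply -f testdata/storage-provisioner/pod.yaml
$ kubectl --context functional-057506 exec sp-pod -- touch /tmp/mount/foo
$ kubectl --context functional-057506 delete -f testdata/storage-provisioner/pod.yaml
$ kubectl --context functional-057506 apply -f testdata/storage-provisioner/pod.yaml
$ kubectl --context functional-057506 exec sp-pod -- ls /tmp/mount    # foo is still there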

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (2.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh -n functional-057506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cp functional-057506:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1731667906/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh -n functional-057506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh -n functional-057506 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)
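Note: `minikube cp` copies in either direction, using profile:path for the node side; from the log (local destination illustrative):
$ minikube -p functional-057506 cp testdata/cp-test.txt /home/docker/cp-test.txt
$ minikube -p functional-057506 cp functional-057506:/home/docker/cp-test.txt ./cp-test.txt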

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1416127/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo cat /etc/test/nested/copy/1416127/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)
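Note: as the sync path in the log suggests, files placed under $MINIKUBE_HOME/files appear inside the node at the corresponding absolute path; a sketch:
$ mkdir -p ~/.minikube/files/etc/test/nested/copy/1416127
$ echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/1416127/hosts
$ minikube -p functional-057506 ssh "cat /etc/test/nested/copy/1416127/hosts"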

TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1416127.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo cat /etc/ssl/certs/1416127.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1416127.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo cat /usr/share/ca-certificates/1416127.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14161272.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo cat /etc/ssl/certs/14161272.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14161272.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo cat /usr/share/ca-certificates/14161272.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-057506 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
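
Note: the --template argument above is evaluated by kubectl with Go's text/template semantics. A minimal standalone sketch of the same template shape, ranging over a hypothetical label map in place of (index .items 0).metadata.labels:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same shape as the kubectl --template above: range over a string
	// map and print each key followed by a space.
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))

	// Hypothetical labels standing in for the node's metadata.labels.
	labels := map[string]string{
		"kubernetes.io/arch":     "arm64",
		"kubernetes.io/hostname": "functional-057506",
	}
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}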

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 ssh "sudo systemctl is-active docker": exit status 1 (283.651815ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 ssh "sudo systemctl is-active crio": exit status 1 (363.046419ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
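
Note: this test passes precisely because both commands fail. `systemctl is-active` prints the unit state on stdout and exits non-zero (status 3 here) when the unit is inactive, so on a containerd cluster the docker and crio checks are expected to exit non-zero with "inactive" on stdout. A minimal sketch of interpreting that convention from Go, assuming a host with systemctl on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs `systemctl is-active <unit>`. systemd prints the state
// on stdout and exits non-zero when the unit is not active, so a
// non-zero exit here is data, not an error.
func isActive(unit string) (bool, string, error) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, state, nil // e.g. "inactive", exit status 3
		}
		return false, state, err // systemctl itself failed to run
	}
	return state == "active", state, nil
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		active, state, err := isActive(unit)
		fmt.Println(unit, active, state, err)
	}
}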

TestFunctional/parallel/License (0.68s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.68s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-057506 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-057506 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-057506 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-057506 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1445825: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
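
Note: "os: process already finished" is the error os.Process.Kill returns once the tunnel process has already exited; the cleanup helper logs it and treats it as benign. A small sketch of that tolerant shutdown pattern, using os.ErrProcessDone (available since Go 1.16):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// stopDaemon kills a background process but tolerates the race where
// it already exited: Kill then returns os.ErrProcessDone, whose string
// form is the "os: process already finished" message logged above.
func stopDaemon(p *os.Process) error {
	if err := p.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		return err
	}
	return nil
}

func main() {
	cmd := exec.Command("sleep", "0.1")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	cmd.Wait() // the process is gone before we try to kill it
	fmt.Println(stopDaemon(cmd.Process)) // <nil>: already-finished is fine
}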

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-057506 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.36s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-057506 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1ee2cec6-9156-4918-836d-2a3c0a91ffb1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1ee2cec6-9156-4918-836d-2a3c0a91ffb1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004654721s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.36s)
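
Note: the wait above polls pods matching run=nginx-svc until every one reports Running, within a 4m0s budget. A rough equivalent of that loop, sketched by shelling out to kubectl with a jsonpath query (selector, namespace, and timeout come from the log; the real helper's mechanics may differ):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls `kubectl get pods` for a label selector until
// every matching pod reports phase Running or the deadline passes.
func waitForRunning(selector, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pods",
			"-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not Running within %v", selector, timeout)
}

func main() {
	fmt.Println(waitForRunning("run=nginx-svc", "default", 4*time.Minute))
}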

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-057506 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.9.162 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-057506 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-057506 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-057506 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-797cg" [24cb0653-c817-43e7-a6dd-427020b831cb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-797cg" [24cb0653-c817-43e7-a6dd-427020b831cb] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004488415s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 service list -o json
functional_test.go:1490: Took "505.156333ms" to run "out/minikube-linux-arm64 -p functional-057506 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32489
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32489
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "323.662137ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "60.203553ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "321.120512ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "74.296051ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
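
Note: the Took "…" lines are plain wall-clock measurements around each invocation; --light returns roughly 4x faster here because it skips validating cluster status. A minimal sketch of the measurement, with the binary path taken from this report:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timeCommand mirrors the Took lines above: run a command once and
// report its wall-clock duration.
func timeCommand(name string, args ...string) (time.Duration, error) {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	return time.Since(start), err
}

func main() {
	d, err := timeCommand("out/minikube-linux-arm64",
		"profile", "list", "-o", "json", "--light")
	fmt.Printf("Took %q (err=%v)\n", d.String(), err)
}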

TestFunctional/parallel/MountCmd/any-port (7.53s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdany-port3059620025/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711577515870926348" to /tmp/TestFunctionalparallelMountCmdany-port3059620025/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711577515870926348" to /tmp/TestFunctionalparallelMountCmdany-port3059620025/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711577515870926348" to /tmp/TestFunctionalparallelMountCmdany-port3059620025/001/test-1711577515870926348
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (432.353879ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 27 22:11 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 27 22:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 27 22:11 test-1711577515870926348
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh cat /mount-9p/test-1711577515870926348
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-057506 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2108c277-21a3-4440-9e32-077a9e55eaa4] Pending
helpers_test.go:344: "busybox-mount" [2108c277-21a3-4440-9e32-077a9e55eaa4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2108c277-21a3-4440-9e32-077a9e55eaa4] Running
helpers_test.go:344: "busybox-mount" [2108c277-21a3-4440-9e32-077a9e55eaa4] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2108c277-21a3-4440-9e32-077a9e55eaa4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003666679s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-057506 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdany-port3059620025/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.53s)
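
Note: the first findmnt probe above exits 1 because the 9p mount is not established yet; the helper simply retries the same probe and succeeds. A sketch of that retry loop, shelling out to the same `minikube ssh` command (the attempt count is an arbitrary choice here):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mountReady re-runs the guest-side findmnt probe until the 9p mount
// shows up. One failed probe while the mount server starts is
// expected, not an error.
func mountReady(profile, mountPoint string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if cmd.Run() == nil {
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	fmt.Println(mountReady("functional-057506", "/mount-9p", 10))
}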

TestFunctional/parallel/MountCmd/specific-port (2.39s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdspecific-port1264553084/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (539.043033ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdspecific-port1264553084/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 ssh "sudo umount -f /mount-9p": exit status 1 (316.197143ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-057506 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdspecific-port1264553084/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)
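
Note: the forced umount exits with status 32 and "not mounted" because stopping the mount daemon already tore the mount down; the test records the exit status and continues. A sketch of surfacing that exit code so a caller can treat the already-unmounted case as benign:

package main

import (
	"fmt"
	"os/exec"
)

// umountExitCode runs `umount -f` and returns its exit code. Status 32
// with "not mounted" (as logged above) means there was nothing left to
// unmount, which cleanup code can safely ignore.
func umountExitCode(target string) int {
	err := exec.Command("sudo", "umount", "-f", target).Run()
	if err == nil {
		return 0
	}
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode()
	}
	return -1 // the command could not be started at all
}

func main() {
	fmt.Println(umountExitCode("/mount-9p"))
}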

TestFunctional/parallel/MountCmd/VerifyCleanup (2.46s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2717468010/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2717468010/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2717468010/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T" /mount1
2024/03/27 22:12:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T" /mount1: exit status 1 (704.15727ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-057506 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2717468010/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2717468010/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-057506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2717468010/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.46s)
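
Note: "unable to find parent, assuming dead" means the stop helper could not find the mount daemon's process after `mount --kill=true` had already removed it. One conventional Unix liveness probe for that kind of check is signal 0, sketched below (the actual helper's mechanics may differ):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// pidAlive reports whether a process exists by sending it signal 0,
// which delivers nothing but fails if the pid is gone. (EPERM on
// another user's process complicates this; fine for a sketch.)
func pidAlive(pid int) bool {
	p, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidAlive(os.Getpid())) // true: we are alive
}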

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.38s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 version -o=json --components: (1.384514994s)
--- PASS: TestFunctional/parallel/Version/components (1.38s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-057506 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-057506
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-057506 image ls --format short --alsologtostderr:
I0327 22:12:26.260036 1450930 out.go:291] Setting OutFile to fd 1 ...
I0327 22:12:26.269539 1450930 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.269607 1450930 out.go:304] Setting ErrFile to fd 2...
I0327 22:12:26.269628 1450930 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.270015 1450930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
I0327 22:12:26.271433 1450930 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.271787 1450930 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.272292 1450930 cli_runner.go:164] Run: docker container inspect functional-057506 --format={{.State.Status}}
I0327 22:12:26.289664 1450930 ssh_runner.go:195] Run: systemctl --version
I0327 22:12:26.289724 1450930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-057506
I0327 22:12:26.308662 1450930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/functional-057506/id_rsa Username:docker}
I0327 22:12:26.405279 1450930 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-057506 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:258111 | 32.1MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | latest             | sha256:070027 | 67.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:121d70 | 30.6MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:0e9b4a | 25MB   |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| docker.io/library/minikube-local-cache-test | functional-057506  | sha256:69040b | 991B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:4b51f9 | 16.9MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/nginx                     | alpine             | sha256:b8c826 | 17.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-057506 image ls --format table --alsologtostderr:
I0327 22:12:26.536796 1450986 out.go:291] Setting OutFile to fd 1 ...
I0327 22:12:26.536965 1450986 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.536983 1450986 out.go:304] Setting ErrFile to fd 2...
I0327 22:12:26.536988 1450986 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.537259 1450986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
I0327 22:12:26.537982 1450986 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.538122 1450986 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.538679 1450986 cli_runner.go:164] Run: docker container inspect functional-057506 --format={{.State.Status}}
I0327 22:12:26.559748 1450986 ssh_runner.go:195] Run: systemctl --version
I0327 22:12:26.559811 1450986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-057506
I0327 22:12:26.575022 1450986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/functional-057506/id_rsa Username:docker}
I0327 22:12:26.667679 1450986 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-057506 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"30578527"},{"id":"sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"25039677"},{"id":"sha256:070027a3cbe09ac697570e31174acc169
9701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216851"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"16931371"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kuber
netesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601398"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/
coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"32143347"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec
1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:69040b9a1b6e37fd2f7d2b9b0dab933a6baa440e07b0e36419113b108520b060","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-057506"],"size":"991"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-057506 image ls --format json --alsologtostderr:
I0327 22:12:26.261760 1450931 out.go:291] Setting OutFile to fd 1 ...
I0327 22:12:26.270367 1450931 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.270804 1450931 out.go:304] Setting ErrFile to fd 2...
I0327 22:12:26.270813 1450931 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.271131 1450931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
I0327 22:12:26.271828 1450931 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.271938 1450931 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.272365 1450931 cli_runner.go:164] Run: docker container inspect functional-057506 --format={{.State.Status}}
I0327 22:12:26.287011 1450931 ssh_runner.go:195] Run: systemctl --version
I0327 22:12:26.287070 1450931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-057506
I0327 22:12:26.302288 1450931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/functional-057506/id_rsa Username:docker}
I0327 22:12:26.390751 1450931 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
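
Note: the JSON format is an array of objects with id, repoDigests, repoTags, and size fields (size is a byte count encoded as a string). A minimal decoder sketch, fed one element trimmed from this run's output:

package main

import (
	"encoding/json"
	"fmt"
)

// image matches the fields of `image ls --format json` as printed
// above; Size is a string in that output, not a number.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// One element taken verbatim from the report's own output.
	data := []byte(`[{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]`)

	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}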

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-057506 image ls --format yaml --alsologtostderr:
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:69040b9a1b6e37fd2f7d2b9b0dab933a6baa440e07b0e36419113b108520b060
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-057506
size: "991"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
repoTags:
- docker.io/library/nginx:alpine
size: "17601398"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "16931371"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "67216851"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "30578527"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "32143347"
- id: sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "25039677"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-057506 image ls --format yaml --alsologtostderr:
I0327 22:12:26.816591 1451061 out.go:291] Setting OutFile to fd 1 ...
I0327 22:12:26.816739 1451061 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.816763 1451061 out.go:304] Setting ErrFile to fd 2...
I0327 22:12:26.816775 1451061 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.817054 1451061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
I0327 22:12:26.817739 1451061 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.817914 1451061 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.818495 1451061 cli_runner.go:164] Run: docker container inspect functional-057506 --format={{.State.Status}}
I0327 22:12:26.846201 1451061 ssh_runner.go:195] Run: systemctl --version
I0327 22:12:26.846273 1451061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-057506
I0327 22:12:26.872610 1451061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/functional-057506/id_rsa Username:docker}
I0327 22:12:26.971942 1451061 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-057506 ssh pgrep buildkitd: exit status 1 (345.565805ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image build -t localhost/my-image:functional-057506 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-057506 image build -t localhost/my-image:functional-057506 testdata/build --alsologtostderr: (2.236069113s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-057506 image build -t localhost/my-image:functional-057506 testdata/build --alsologtostderr:
I0327 22:12:26.908611 1451076 out.go:291] Setting OutFile to fd 1 ...
I0327 22:12:26.909325 1451076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.909344 1451076 out.go:304] Setting ErrFile to fd 2...
I0327 22:12:26.909351 1451076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 22:12:26.909638 1451076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
I0327 22:12:26.910442 1451076 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.914009 1451076 config.go:182] Loaded profile config "functional-057506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 22:12:26.914871 1451076 cli_runner.go:164] Run: docker container inspect functional-057506 --format={{.State.Status}}
I0327 22:12:26.931747 1451076 ssh_runner.go:195] Run: systemctl --version
I0327 22:12:26.931809 1451076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-057506
I0327 22:12:26.948254 1451076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/functional-057506/id_rsa Username:docker}
I0327 22:12:27.043335 1451076 build_images.go:161] Building image from path: /tmp/build.3118196774.tar
I0327 22:12:27.043488 1451076 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0327 22:12:27.054274 1451076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3118196774.tar
I0327 22:12:27.058482 1451076 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3118196774.tar: stat -c "%s %y" /var/lib/minikube/build/build.3118196774.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3118196774.tar': No such file or directory
I0327 22:12:27.058515 1451076 ssh_runner.go:362] scp /tmp/build.3118196774.tar --> /var/lib/minikube/build/build.3118196774.tar (3072 bytes)
I0327 22:12:27.087732 1451076 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3118196774
I0327 22:12:27.102801 1451076 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3118196774 -xf /var/lib/minikube/build/build.3118196774.tar
I0327 22:12:27.113450 1451076 containerd.go:394] Building image: /var/lib/minikube/build/build.3118196774
I0327 22:12:27.113588 1451076 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3118196774 --local dockerfile=/var/lib/minikube/build/build.3118196774 --output type=image,name=localhost/my-image:functional-057506
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:491c63762617b2a76287992b297d49ebc9d01bc1a752853a75ad18f76fd5a319 0.0s done
#8 exporting config sha256:a53451084c53572922d7f1198bc6267808344db98afe86538b720ad69150bda0 0.0s done
#8 naming to localhost/my-image:functional-057506 done
#8 DONE 0.1s
I0327 22:12:29.041631 1451076 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3118196774 --local dockerfile=/var/lib/minikube/build/build.3118196774 --output type=image,name=localhost/my-image:functional-057506: (1.927998599s)
I0327 22:12:29.041721 1451076 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3118196774
I0327 22:12:29.051523 1451076 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3118196774.tar
I0327 22:12:29.061136 1451076 build_images.go:217] Built localhost/my-image:functional-057506 from /tmp/build.3118196774.tar
I0327 22:12:29.061165 1451076 build_images.go:133] succeeded building to: functional-057506
I0327 22:12:29.061171 1451076 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)
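
Note: with the containerd runtime, minikube copies the build context into the guest as a tarball and delegates the build to buildctl, as the Run lines above show. A sketch of the same invocation shape via os/exec; the flags are copied from the log, while the build directory is this run's temporary path and changes on every invocation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the buildctl command logged above.
	buildDir := "/var/lib/minikube/build/build.3118196774"
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+buildDir,
		"--local", "dockerfile="+buildDir,
		"--output", "type=image,name=localhost/my-image:functional-057506")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}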

TestFunctional/parallel/ImageCommands/Setup (2.58s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.560028223s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-057506
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.58s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image rm gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-057506
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-057506 image save --daemon gcr.io/google-containers/addon-resizer:functional-057506 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-057506
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)
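
Note: this test checks a full round-trip: delete the tag from the host's Docker daemon, pull it back out of the cluster with `image save --daemon`, then confirm the daemon can inspect it again. Condensed from the commands above:

	docker rmi gcr.io/google-containers/addon-resizer:functional-057506
	out/minikube-linux-arm64 -p functional-057506 image save --daemon \
	  gcr.io/google-containers/addon-resizer:functional-057506
	docker image inspect gcr.io/google-containers/addon-resizer:functional-057506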

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-057506
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-057506
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-057506
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (130.71s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-225733 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0327 22:12:32.630091 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-225733 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m9.781989337s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (130.71s)
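
Note: the --ha flag provisions multiple control-plane nodes in a single profile; the status call at ha_test.go:107 is what verifies they all came up. A minimal sketch of the same cluster shape, with the flag values copied from the run above and the verbosity flags dropped:

	out/minikube-linux-arm64 start -p ha-225733 --ha --wait=true --memory=2200 \
	  --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p ha-225733 status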

TestMultiControlPlane/serial/DeployApp (19.65s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- rollout status deployment/busybox
E0327 22:14:48.784486 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-225733 -- rollout status deployment/busybox: (16.135108723s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-8cwsf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-clcf9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-xfnb2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-8cwsf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-clcf9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-xfnb2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-8cwsf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-clcf9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-xfnb2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (19.65s)
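
Note: the deploy check is two-phase: wait for the busybox Deployment to roll out, then resolve an external name and the in-cluster service names from every pod. A sketch of the per-pod check, using plain kubectl with the profile's context in place of the wrapped `minikube kubectl -p` above, and a placeholder pod name:

	kubectl --context ha-225733 rollout status deployment/busybox
	kubectl --context ha-225733 exec <busybox-pod> -- nslookup kubernetes.io
	kubectl --context ha-225733 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local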

TestMultiControlPlane/serial/PingHostFromPods (1.79s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-8cwsf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-8cwsf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-clcf9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-clcf9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-xfnb2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-225733 -- exec busybox-7fdf7869d9-xfnb2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.79s)
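
Note: the host-reachability check first extracts the host gateway IP by resolving host.minikube.internal inside each pod; the `awk 'NR==5' | cut -d' ' -f3` pipeline assumes busybox nslookup prints the answer address on its fifth output line, third space-separated field. Sketch with a placeholder pod name (the resolved address in this run is 192.168.49.1):

	HOST_IP=$(kubectl --context ha-225733 exec <busybox-pod> -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context ha-225733 exec <busybox-pod> -- ping -c 1 "$HOST_IP"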

TestMultiControlPlane/serial/AddWorkerNode (23.95s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-225733 -v=7 --alsologtostderr
E0327 22:15:16.470659 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-225733 -v=7 --alsologtostderr: (22.928584303s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr: (1.018760624s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.95s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-225733 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

TestMultiControlPlane/serial/CopyFile (19.87s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-225733 status --output json -v=7 --alsologtostderr: (1.071821834s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp testdata/cp-test.txt ha-225733:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile187835303/001/cp-test_ha-225733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733:/home/docker/cp-test.txt ha-225733-m02:/home/docker/cp-test_ha-225733_ha-225733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m02 "sudo cat /home/docker/cp-test_ha-225733_ha-225733-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733:/home/docker/cp-test.txt ha-225733-m03:/home/docker/cp-test_ha-225733_ha-225733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m03 "sudo cat /home/docker/cp-test_ha-225733_ha-225733-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733:/home/docker/cp-test.txt ha-225733-m04:/home/docker/cp-test_ha-225733_ha-225733-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m04 "sudo cat /home/docker/cp-test_ha-225733_ha-225733-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp testdata/cp-test.txt ha-225733-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile187835303/001/cp-test_ha-225733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m02:/home/docker/cp-test.txt ha-225733:/home/docker/cp-test_ha-225733-m02_ha-225733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733 "sudo cat /home/docker/cp-test_ha-225733-m02_ha-225733.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m02:/home/docker/cp-test.txt ha-225733-m03:/home/docker/cp-test_ha-225733-m02_ha-225733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m03 "sudo cat /home/docker/cp-test_ha-225733-m02_ha-225733-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m02:/home/docker/cp-test.txt ha-225733-m04:/home/docker/cp-test_ha-225733-m02_ha-225733-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m04 "sudo cat /home/docker/cp-test_ha-225733-m02_ha-225733-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp testdata/cp-test.txt ha-225733-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile187835303/001/cp-test_ha-225733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m03:/home/docker/cp-test.txt ha-225733:/home/docker/cp-test_ha-225733-m03_ha-225733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733 "sudo cat /home/docker/cp-test_ha-225733-m03_ha-225733.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m03:/home/docker/cp-test.txt ha-225733-m02:/home/docker/cp-test_ha-225733-m03_ha-225733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m02 "sudo cat /home/docker/cp-test_ha-225733-m03_ha-225733-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m03:/home/docker/cp-test.txt ha-225733-m04:/home/docker/cp-test_ha-225733-m03_ha-225733-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m04 "sudo cat /home/docker/cp-test_ha-225733-m03_ha-225733-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp testdata/cp-test.txt ha-225733-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile187835303/001/cp-test_ha-225733-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m04:/home/docker/cp-test.txt ha-225733:/home/docker/cp-test_ha-225733-m04_ha-225733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733 "sudo cat /home/docker/cp-test_ha-225733-m04_ha-225733.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m04:/home/docker/cp-test.txt ha-225733-m02:/home/docker/cp-test_ha-225733-m04_ha-225733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m02 "sudo cat /home/docker/cp-test_ha-225733-m04_ha-225733-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 cp ha-225733-m04:/home/docker/cp-test.txt ha-225733-m03:/home/docker/cp-test_ha-225733-m04_ha-225733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 ssh -n ha-225733-m03 "sudo cat /home/docker/cp-test_ha-225733-m04_ha-225733-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.87s)
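
Note: the long copy matrix above is one pattern repeated for every (source, destination) node pair: `minikube cp` moves the file, and `minikube ssh -n <node>` with sudo cat verifies the bytes landed. The general shape, with node names as placeholders:

	out/minikube-linux-arm64 -p ha-225733 cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-225733 ssh -n <node> "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p ha-225733 cp <src>:/home/docker/cp-test.txt <dst>:/home/docker/cp-test_<src>_<dst>.txt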

TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-225733 node stop m02 -v=7 --alsologtostderr: (12.114845828s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr: exit status 7 (789.279244ms)

-- stdout --
	ha-225733
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-225733-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-225733-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-225733-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0327 22:16:01.132473 1466313 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:16:01.132807 1466313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:16:01.132824 1466313 out.go:304] Setting ErrFile to fd 2...
	I0327 22:16:01.132832 1466313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:16:01.133221 1466313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:16:01.133471 1466313 out.go:298] Setting JSON to false
	I0327 22:16:01.133526 1466313 mustload.go:65] Loading cluster: ha-225733
	I0327 22:16:01.133620 1466313 notify.go:220] Checking for updates...
	I0327 22:16:01.134729 1466313 config.go:182] Loaded profile config "ha-225733": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:16:01.134758 1466313 status.go:255] checking status of ha-225733 ...
	I0327 22:16:01.135241 1466313 cli_runner.go:164] Run: docker container inspect ha-225733 --format={{.State.Status}}
	I0327 22:16:01.153811 1466313 status.go:330] ha-225733 host status = "Running" (err=<nil>)
	I0327 22:16:01.153837 1466313 host.go:66] Checking if "ha-225733" exists ...
	I0327 22:16:01.154164 1466313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-225733
	I0327 22:16:01.172895 1466313 host.go:66] Checking if "ha-225733" exists ...
	I0327 22:16:01.173271 1466313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 22:16:01.173323 1466313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-225733
	I0327 22:16:01.225723 1466313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/ha-225733/id_rsa Username:docker}
	I0327 22:16:01.331155 1466313 ssh_runner.go:195] Run: systemctl --version
	I0327 22:16:01.342157 1466313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 22:16:01.358868 1466313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:16:01.418161 1466313 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:72 SystemTime:2024-03-27 22:16:01.407144011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:16:01.418853 1466313 kubeconfig.go:125] found "ha-225733" server: "https://192.168.49.254:8443"
	I0327 22:16:01.418890 1466313 api_server.go:166] Checking apiserver status ...
	I0327 22:16:01.418937 1466313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 22:16:01.431328 1466313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup
	I0327 22:16:01.441741 1466313 api_server.go:182] apiserver freezer: "7:freezer:/docker/27add78e367ed9f80741025dae208f17947a89d663c488d2a20a63beac3d19eb/kubepods/burstable/pod21a14bae74516060642adb74847d80eb/e66c24c84d254affa4b826de78e7ed0ca9a8468f32dfde3bf85bf3985bbe0850"
	I0327 22:16:01.441820 1466313 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27add78e367ed9f80741025dae208f17947a89d663c488d2a20a63beac3d19eb/kubepods/burstable/pod21a14bae74516060642adb74847d80eb/e66c24c84d254affa4b826de78e7ed0ca9a8468f32dfde3bf85bf3985bbe0850/freezer.state
	I0327 22:16:01.451561 1466313 api_server.go:204] freezer state: "THAWED"
	I0327 22:16:01.451594 1466313 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0327 22:16:01.461064 1466313 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0327 22:16:01.461100 1466313 status.go:422] ha-225733 apiserver status = Running (err=<nil>)
	I0327 22:16:01.461140 1466313 status.go:257] ha-225733 status: &{Name:ha-225733 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 22:16:01.461166 1466313 status.go:255] checking status of ha-225733-m02 ...
	I0327 22:16:01.461535 1466313 cli_runner.go:164] Run: docker container inspect ha-225733-m02 --format={{.State.Status}}
	I0327 22:16:01.476584 1466313 status.go:330] ha-225733-m02 host status = "Stopped" (err=<nil>)
	I0327 22:16:01.476610 1466313 status.go:343] host is not running, skipping remaining checks
	I0327 22:16:01.476618 1466313 status.go:257] ha-225733-m02 status: &{Name:ha-225733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 22:16:01.476641 1466313 status.go:255] checking status of ha-225733-m03 ...
	I0327 22:16:01.477020 1466313 cli_runner.go:164] Run: docker container inspect ha-225733-m03 --format={{.State.Status}}
	I0327 22:16:01.492906 1466313 status.go:330] ha-225733-m03 host status = "Running" (err=<nil>)
	I0327 22:16:01.492931 1466313 host.go:66] Checking if "ha-225733-m03" exists ...
	I0327 22:16:01.493255 1466313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-225733-m03
	I0327 22:16:01.510119 1466313 host.go:66] Checking if "ha-225733-m03" exists ...
	I0327 22:16:01.510496 1466313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 22:16:01.510565 1466313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-225733-m03
	I0327 22:16:01.532931 1466313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/ha-225733-m03/id_rsa Username:docker}
	I0327 22:16:01.620196 1466313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 22:16:01.636370 1466313 kubeconfig.go:125] found "ha-225733" server: "https://192.168.49.254:8443"
	I0327 22:16:01.636398 1466313 api_server.go:166] Checking apiserver status ...
	I0327 22:16:01.636464 1466313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 22:16:01.648897 1466313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1322/cgroup
	I0327 22:16:01.660526 1466313 api_server.go:182] apiserver freezer: "7:freezer:/docker/7306a155969282631887f5fbe6b2ef7047a73d21d4b0d6c2fa69f895e481ab17/kubepods/burstable/podafc31536d9a5649dbeffe2d4ecd70f2c/e4fc56b4d0df420b274caaeaf58f38504bbd3eccf7b35a3035dd0001c036aa82"
	I0327 22:16:01.660636 1466313 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7306a155969282631887f5fbe6b2ef7047a73d21d4b0d6c2fa69f895e481ab17/kubepods/burstable/podafc31536d9a5649dbeffe2d4ecd70f2c/e4fc56b4d0df420b274caaeaf58f38504bbd3eccf7b35a3035dd0001c036aa82/freezer.state
	I0327 22:16:01.671674 1466313 api_server.go:204] freezer state: "THAWED"
	I0327 22:16:01.671714 1466313 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0327 22:16:01.679685 1466313 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0327 22:16:01.679716 1466313 status.go:422] ha-225733-m03 apiserver status = Running (err=<nil>)
	I0327 22:16:01.679725 1466313 status.go:257] ha-225733-m03 status: &{Name:ha-225733-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 22:16:01.679744 1466313 status.go:255] checking status of ha-225733-m04 ...
	I0327 22:16:01.680040 1466313 cli_runner.go:164] Run: docker container inspect ha-225733-m04 --format={{.State.Status}}
	I0327 22:16:01.696672 1466313 status.go:330] ha-225733-m04 host status = "Running" (err=<nil>)
	I0327 22:16:01.696698 1466313 host.go:66] Checking if "ha-225733-m04" exists ...
	I0327 22:16:01.697005 1466313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-225733-m04
	I0327 22:16:01.713325 1466313 host.go:66] Checking if "ha-225733-m04" exists ...
	I0327 22:16:01.713646 1466313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 22:16:01.713722 1466313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-225733-m04
	I0327 22:16:01.728973 1466313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34335 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/ha-225733-m04/id_rsa Username:docker}
	I0327 22:16:01.828256 1466313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 22:16:01.841320 1466313 status.go:257] ha-225733-m04 status: &{Name:ha-225733-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
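
Note: the stderr trace shows how `status` decides an apiserver is Running: it pgreps the kube-apiserver process on the node, reads that PID's freezer cgroup to confirm the container is THAWED (not paused), then probes /healthz through the HA load-balancer endpoint (https://192.168.49.254:8443 here). A condensed sketch of the same probes run inside a control-plane node; the PID and cgroup path are from this run, and curl -k stands in for the Go health check in the log:

	sudo pgrep -xnf kube-apiserver.*minikube.*        # -> 1471 on ha-225733 here
	sudo egrep '^[0-9]+:freezer:' /proc/1471/cgroup   # -> freezer cgroup path
	sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state   # expect THAWED
	curl -ks https://192.168.49.254:8443/healthz      # expect 200 "ok"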

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.61s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.61s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-225733 node start m02 -v=7 --alsologtostderr: (17.211224429s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.13s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-225733 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-225733 -v=7 --alsologtostderr
E0327 22:16:26.358630 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:26.363920 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:26.374207 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:26.394571 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:26.434858 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:26.515218 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:26.675611 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:26.996220 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:27.637286 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:28.917974 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:31.478583 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:36.599498 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:16:46.840307 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-225733 -v=7 --alsologtostderr: (26.464229996s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-225733 --wait=true -v=7 --alsologtostderr
E0327 22:17:07.320952 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:17:48.281278 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-225733 --wait=true -v=7 --alsologtostderr: (1m56.453203174s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-225733
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.13s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.76s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-225733 node delete m03 -v=7 --alsologtostderr: (10.7705116s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.76s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (36.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 stop -v=7 --alsologtostderr
E0327 22:19:10.201498 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-225733 stop -v=7 --alsologtostderr: (35.957003463s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr: exit status 7 (120.168233ms)

-- stdout --
	ha-225733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-225733-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-225733-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0327 22:19:33.003166 1479439 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:19:33.003420 1479439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:19:33.003432 1479439 out.go:304] Setting ErrFile to fd 2...
	I0327 22:19:33.003438 1479439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:19:33.003744 1479439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:19:33.003988 1479439 out.go:298] Setting JSON to false
	I0327 22:19:33.004040 1479439 mustload.go:65] Loading cluster: ha-225733
	I0327 22:19:33.004132 1479439 notify.go:220] Checking for updates...
	I0327 22:19:33.005152 1479439 config.go:182] Loaded profile config "ha-225733": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:19:33.005229 1479439 status.go:255] checking status of ha-225733 ...
	I0327 22:19:33.005870 1479439 cli_runner.go:164] Run: docker container inspect ha-225733 --format={{.State.Status}}
	I0327 22:19:33.024280 1479439 status.go:330] ha-225733 host status = "Stopped" (err=<nil>)
	I0327 22:19:33.024308 1479439 status.go:343] host is not running, skipping remaining checks
	I0327 22:19:33.024317 1479439 status.go:257] ha-225733 status: &{Name:ha-225733 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 22:19:33.024345 1479439 status.go:255] checking status of ha-225733-m02 ...
	I0327 22:19:33.024683 1479439 cli_runner.go:164] Run: docker container inspect ha-225733-m02 --format={{.State.Status}}
	I0327 22:19:33.041989 1479439 status.go:330] ha-225733-m02 host status = "Stopped" (err=<nil>)
	I0327 22:19:33.042022 1479439 status.go:343] host is not running, skipping remaining checks
	I0327 22:19:33.042031 1479439 status.go:257] ha-225733-m02 status: &{Name:ha-225733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 22:19:33.042064 1479439 status.go:255] checking status of ha-225733-m04 ...
	I0327 22:19:33.042594 1479439 cli_runner.go:164] Run: docker container inspect ha-225733-m04 --format={{.State.Status}}
	I0327 22:19:33.064027 1479439 status.go:330] ha-225733-m04 host status = "Stopped" (err=<nil>)
	I0327 22:19:33.064055 1479439 status.go:343] host is not running, skipping remaining checks
	I0327 22:19:33.064064 1479439 status.go:257] ha-225733-m04 status: &{Name:ha-225733-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.08s)
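
Note: `status` deliberately exits non-zero when any node is down; in this run (and in StopSecondaryNode above) exit status 7 accompanies stopped hosts, so scripted callers should branch on the exit code rather than parse the text. Sketch:

	out/minikube-linux-arm64 -p ha-225733 status
	rc=$?   # 0 when everything is Running; 7 observed here with hosts Stopped
	[ "$rc" -eq 0 ] || echo "cluster not fully running (status exit $rc)"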

TestMultiControlPlane/serial/RestartCluster (82.72s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-225733 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0327 22:19:48.785176 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-225733 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m21.68634466s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.72s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMultiControlPlane/serial/AddSecondaryNode (46.12s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-225733 --control-plane -v=7 --alsologtostderr
E0327 22:21:26.358871 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-225733 --control-plane -v=7 --alsologtostderr: (45.058751601s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-225733 status -v=7 --alsologtostderr: (1.06500396s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.12s)
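
Note: growing the control plane reuses the same `node add` as the worker case earlier, plus --control-plane; a follow-up status should list the new node as "type: Control Plane". Sketch:

	out/minikube-linux-arm64 node add -p ha-225733 --control-plane
	out/minikube-linux-arm64 -p ha-225733 status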

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

TestJSONOutput/start/Command (87.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-585134 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0327 22:21:54.042527 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-585134 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m27.728932247s)
--- PASS: TestJSONOutput/start/Command (87.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-585134 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.71s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-585134 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-585134 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-585134 --output=json --user=testUser: (5.906729852s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-564711 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-564711 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.977259ms)

-- stdout --
	{"specversion":"1.0","id":"bbc96f8c-27f1-438b-8219-8279805ce87e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-564711] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"64fbefbb-ba56-47d6-8670-415458eca282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17735"}}
	{"specversion":"1.0","id":"924dc072-8289-4355-97e6-858f6706c065","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"745aa9c5-b940-4450-8dc4-ddfe6e557cad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig"}}
	{"specversion":"1.0","id":"92397a3e-0b0a-42ca-a671-0d3353d96a96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube"}}
	{"specversion":"1.0","id":"8e0b98a7-b880-42df-aa49-4a5df2bd8815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9101404a-24fe-48f6-912b-41d327595480","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7b3138d2-9f3f-426b-aec5-3b753489fb8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-564711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-564711
--- PASS: TestErrorJSONOutput (0.24s)
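Note on the output format: every line minikube emits under --output=json is a CloudEvents-style JSON object, as the stdout above shows. The following is a minimal Go sketch (not minikube's own code) that decodes one such event; the struct models only the fields visible in this log, and the sample line is copied verbatim from the error event above.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// cloudEvent models only the keys visible in the log above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Error event copied from the TestErrorJSONOutput stdout above.
	line := `{"specversion":"1.0","id":"7b3138d2-9f3f-426b-aec5-3b753489fb8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	// For io.k8s.sigs.minikube.error events, the exit code and message
	// ride in the data map as strings.
	fmt.Printf("%s: %s (exitcode=%s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}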

TestKicCustomNetwork/create_custom_network (40.76s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-385291 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-385291 --network=: (38.633464486s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-385291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-385291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-385291: (2.10260299s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.76s)

TestKicCustomNetwork/use_default_bridge_network (31.68s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-542596 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-542596 --network=bridge: (29.677879383s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-542596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-542596
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-542596: (1.988496525s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.68s)

TestKicExistingNetwork (32.52s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-512942 --network=existing-network
E0327 22:24:48.785113 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-512942 --network=existing-network: (30.322708797s)
helpers_test.go:175: Cleaning up "existing-network-512942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-512942
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-512942: (2.066677555s)
--- PASS: TestKicExistingNetwork (32.52s)

TestKicCustomSubnet (35.33s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-681365 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-681365 --subnet=192.168.60.0/24: (33.271249295s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-681365 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-681365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-681365
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-681365: (2.044123757s)
--- PASS: TestKicCustomSubnet (35.33s)
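The assertion in kic_custom_network_test.go:161 reads the subnet back through a Go template on docker network inspect. A rough standalone equivalent of that check (a sketch, not the test's actual helper; network name and expected subnet taken from the log above):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test uses: the first IPAM config entry's subnet.
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-681365", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		log.Fatalf("subnet mismatch: got %s, want 192.168.60.0/24", got)
	}
	fmt.Println("subnet matches requested --subnet")
}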

TestKicStaticIP (36.37s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-480216 --static-ip=192.168.200.200
E0327 22:26:11.830896 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-480216 --static-ip=192.168.200.200: (34.121171226s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-480216 ip
helpers_test.go:175: Cleaning up "static-ip-480216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-480216
E0327 22:26:26.358743 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-480216: (2.096344969s)
--- PASS: TestKicStaticIP (36.37s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (69.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-679166 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-679166 --driver=docker  --container-runtime=containerd: (31.330927792s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-682235 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-682235 --driver=docker  --container-runtime=containerd: (32.806150789s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-679166
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-682235
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-682235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-682235
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-682235: (1.939286548s)
helpers_test.go:175: Cleaning up "first-679166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-679166
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-679166: (1.991056307s)
--- PASS: TestMinikubeProfile (69.25s)
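profile list -ojson is what makes the two-profile check above scriptable. Below is a hedged sketch of consuming that output; the top-level "valid"/"invalid" arrays and the Name field are assumptions about the schema, not verified against this run.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed schema: {"invalid": [...], "valid": [{"Name": ...}, ...]}.
	var profiles map[string][]struct {
		Name string `json:"Name"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatal(err)
	}
	for _, p := range profiles["valid"] {
		fmt.Println("valid profile:", p.Name)
	}
}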

TestMountStart/serial/StartWithMountFirst (6.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-137460 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-137460 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.309801438s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.31s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-137460 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-151573 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-151573 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.5891293s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.59s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-151573 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-137460 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-137460 --alsologtostderr -v=5: (1.595220066s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-151573 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-151573
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-151573: (1.208553889s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-151573
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-151573: (6.219715434s)
--- PASS: TestMountStart/serial/RestartStopped (7.22s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-151573 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (94.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-164871 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-164871 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m33.603159878s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.11s)

TestMultiNode/serial/DeployApp2Nodes (4.19s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-164871 -- rollout status deployment/busybox: (2.179970346s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-985k6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-wvkhs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-985k6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-wvkhs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-985k6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-wvkhs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.19s)

TestMultiNode/serial/PingHostFrom2Pods (1.18s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-985k6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-985k6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-wvkhs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-164871 -- exec busybox-7fdf7869d9-wvkhs -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.18s)
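The shell pipeline above (awk 'NR==5' | cut -d' ' -f3) pulls the resolved host IP out of busybox nslookup output: fifth line, third space-separated field, which the test then pings. The same extraction in Go, with an illustrative sample shaped like busybox nslookup output (the exact line layout is an assumption, not copied from this run):

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take the fifth line,
// then its third space-delimited field.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative output only; real busybox nslookup may differ.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"
	fmt.Println(hostIP(sample)) // 192.168.67.1
}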

TestMultiNode/serial/AddNode (17.05s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-164871 -v 3 --alsologtostderr
E0327 22:29:48.784808 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-164871 -v 3 --alsologtostderr: (16.389582402s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.05s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-164871 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.45s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

TestMultiNode/serial/CopyFile (10.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-linux-arm64 -p multinode-164871 status --output json --alsologtostderr: (1.072414151s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp testdata/cp-test.txt multinode-164871:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1973720991/001/cp-test_multinode-164871.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871:/home/docker/cp-test.txt multinode-164871-m02:/home/docker/cp-test_multinode-164871_multinode-164871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m02 "sudo cat /home/docker/cp-test_multinode-164871_multinode-164871-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871:/home/docker/cp-test.txt multinode-164871-m03:/home/docker/cp-test_multinode-164871_multinode-164871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m03 "sudo cat /home/docker/cp-test_multinode-164871_multinode-164871-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp testdata/cp-test.txt multinode-164871-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1973720991/001/cp-test_multinode-164871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871-m02:/home/docker/cp-test.txt multinode-164871:/home/docker/cp-test_multinode-164871-m02_multinode-164871.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871 "sudo cat /home/docker/cp-test_multinode-164871-m02_multinode-164871.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871-m02:/home/docker/cp-test.txt multinode-164871-m03:/home/docker/cp-test_multinode-164871-m02_multinode-164871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m03 "sudo cat /home/docker/cp-test_multinode-164871-m02_multinode-164871-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp testdata/cp-test.txt multinode-164871-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1973720991/001/cp-test_multinode-164871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871-m03:/home/docker/cp-test.txt multinode-164871:/home/docker/cp-test_multinode-164871-m03_multinode-164871.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871 "sudo cat /home/docker/cp-test_multinode-164871-m03_multinode-164871.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 cp multinode-164871-m03:/home/docker/cp-test.txt multinode-164871-m02:/home/docker/cp-test_multinode-164871-m03_multinode-164871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 ssh -n multinode-164871-m02 "sudo cat /home/docker/cp-test_multinode-164871-m03_multinode-164871-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.68s)
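Every cp/ssh pair above is one round trip: copy the fixture onto a node (or between nodes), read it back over SSH, and compare. Condensed into a hypothetical standalone check (binary path, profile name, and file paths taken from the log; this is a sketch, not the test's code):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	if err != nil {
		log.Fatal(err)
	}
	return out
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy the fixture onto the control-plane node...
	run("-p", "multinode-164871", "cp", "testdata/cp-test.txt",
		"multinode-164871:/home/docker/cp-test.txt")
	// ...then read it back over SSH and compare.
	got := run("-p", "multinode-164871", "ssh", "-n", "multinode-164871",
		"sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("round trip mismatch: got %q", got)
	}
}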

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-164871 node stop m03: (1.238422864s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-164871 status: exit status 7 (512.162179ms)

-- stdout --
	multinode-164871
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-164871-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-164871-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-164871 status --alsologtostderr: exit status 7 (502.006115ms)

-- stdout --
	multinode-164871
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-164871-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-164871-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0327 22:30:12.607659 1531170 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:30:12.607825 1531170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:30:12.607835 1531170 out.go:304] Setting ErrFile to fd 2...
	I0327 22:30:12.607841 1531170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:30:12.608118 1531170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:30:12.608303 1531170 out.go:298] Setting JSON to false
	I0327 22:30:12.608352 1531170 mustload.go:65] Loading cluster: multinode-164871
	I0327 22:30:12.608440 1531170 notify.go:220] Checking for updates...
	I0327 22:30:12.608780 1531170 config.go:182] Loaded profile config "multinode-164871": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:30:12.608799 1531170 status.go:255] checking status of multinode-164871 ...
	I0327 22:30:12.609296 1531170 cli_runner.go:164] Run: docker container inspect multinode-164871 --format={{.State.Status}}
	I0327 22:30:12.627018 1531170 status.go:330] multinode-164871 host status = "Running" (err=<nil>)
	I0327 22:30:12.627044 1531170 host.go:66] Checking if "multinode-164871" exists ...
	I0327 22:30:12.627468 1531170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-164871
	I0327 22:30:12.646914 1531170 host.go:66] Checking if "multinode-164871" exists ...
	I0327 22:30:12.647249 1531170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 22:30:12.647304 1531170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-164871
	I0327 22:30:12.667791 1531170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34440 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/multinode-164871/id_rsa Username:docker}
	I0327 22:30:12.756010 1531170 ssh_runner.go:195] Run: systemctl --version
	I0327 22:30:12.760437 1531170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 22:30:12.773900 1531170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:30:12.839158 1531170 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-03-27 22:30:12.827612489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:30:12.839893 1531170 kubeconfig.go:125] found "multinode-164871" server: "https://192.168.67.2:8443"
	I0327 22:30:12.839942 1531170 api_server.go:166] Checking apiserver status ...
	I0327 22:30:12.839999 1531170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 22:30:12.853031 1531170 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	I0327 22:30:12.863041 1531170 api_server.go:182] apiserver freezer: "7:freezer:/docker/8b4ca2baa46d4e6ae4c3b852f8d8853407fdb26b6c459d0a4db0c51cd96a0943/kubepods/burstable/podd5e9f1bafb033237fe3b2892a103bafc/9425dc59a08404e3287907ee3cdd0c495e58e6d9fe35029cc83ce5c897a45d72"
	I0327 22:30:12.863121 1531170 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8b4ca2baa46d4e6ae4c3b852f8d8853407fdb26b6c459d0a4db0c51cd96a0943/kubepods/burstable/podd5e9f1bafb033237fe3b2892a103bafc/9425dc59a08404e3287907ee3cdd0c495e58e6d9fe35029cc83ce5c897a45d72/freezer.state
	I0327 22:30:12.872247 1531170 api_server.go:204] freezer state: "THAWED"
	I0327 22:30:12.872274 1531170 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0327 22:30:12.881048 1531170 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0327 22:30:12.881081 1531170 status.go:422] multinode-164871 apiserver status = Running (err=<nil>)
	I0327 22:30:12.881097 1531170 status.go:257] multinode-164871 status: &{Name:multinode-164871 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 22:30:12.881115 1531170 status.go:255] checking status of multinode-164871-m02 ...
	I0327 22:30:12.881431 1531170 cli_runner.go:164] Run: docker container inspect multinode-164871-m02 --format={{.State.Status}}
	I0327 22:30:12.896990 1531170 status.go:330] multinode-164871-m02 host status = "Running" (err=<nil>)
	I0327 22:30:12.897023 1531170 host.go:66] Checking if "multinode-164871-m02" exists ...
	I0327 22:30:12.897353 1531170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-164871-m02
	I0327 22:30:12.912828 1531170 host.go:66] Checking if "multinode-164871-m02" exists ...
	I0327 22:30:12.913141 1531170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 22:30:12.913192 1531170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-164871-m02
	I0327 22:30:12.929450 1531170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34445 SSHKeyPath:/home/jenkins/minikube-integration/17735-1410709/.minikube/machines/multinode-164871-m02/id_rsa Username:docker}
	I0327 22:30:13.016775 1531170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 22:30:13.029222 1531170 status.go:257] multinode-164871-m02 status: &{Name:multinode-164871-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0327 22:30:13.029265 1531170 status.go:255] checking status of multinode-164871-m03 ...
	I0327 22:30:13.029812 1531170 cli_runner.go:164] Run: docker container inspect multinode-164871-m03 --format={{.State.Status}}
	I0327 22:30:13.046739 1531170 status.go:330] multinode-164871-m03 host status = "Stopped" (err=<nil>)
	I0327 22:30:13.046763 1531170 status.go:343] host is not running, skipping remaining checks
	I0327 22:30:13.046771 1531170 status.go:257] multinode-164871-m03 status: &{Name:multinode-164871-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
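The --alsologtostderr trace above documents the probe chain behind `status`: inspect the container state, pgrep the kube-apiserver, confirm its freezer cgroup reports THAWED, then GET /healthz. A minimal sketch of that last step, assuming a self-signed apiserver certificate (hence the skipped TLS verification); the endpoint is the one shown in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// The local apiserver presents a self-signed cert, so verification
	// is skipped here (an assumption, for a local health probe only).
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	// The log above shows 200 with body "ok" for a healthy apiserver.
	fmt.Println("healthz:", resp.StatusCode)
}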

TestMultiNode/serial/StartAfterStop (9.35s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-164871 node start m03 -v=7 --alsologtostderr: (8.597753813s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.35s)

TestMultiNode/serial/RestartKeepsNodes (83.11s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-164871
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-164871
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-164871: (25.014620942s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-164871 --wait=true -v=8 --alsologtostderr
E0327 22:31:26.358724 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-164871 --wait=true -v=8 --alsologtostderr: (57.939266851s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-164871
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.11s)

TestMultiNode/serial/DeleteNode (5.42s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-164871 node delete m03: (4.745521976s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.42s)

TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-164871 stop: (23.856862292s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-164871 status: exit status 7 (98.448917ms)

-- stdout --
	multinode-164871
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-164871-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-164871 status --alsologtostderr: exit status 7 (96.817478ms)

-- stdout --
	multinode-164871
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-164871-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0327 22:32:14.949403 1538833 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:32:14.949567 1538833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:32:14.949580 1538833 out.go:304] Setting ErrFile to fd 2...
	I0327 22:32:14.949588 1538833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:32:14.949874 1538833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:32:14.950107 1538833 out.go:298] Setting JSON to false
	I0327 22:32:14.950194 1538833 mustload.go:65] Loading cluster: multinode-164871
	I0327 22:32:14.950274 1538833 notify.go:220] Checking for updates...
	I0327 22:32:14.950731 1538833 config.go:182] Loaded profile config "multinode-164871": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:32:14.950753 1538833 status.go:255] checking status of multinode-164871 ...
	I0327 22:32:14.951330 1538833 cli_runner.go:164] Run: docker container inspect multinode-164871 --format={{.State.Status}}
	I0327 22:32:14.968368 1538833 status.go:330] multinode-164871 host status = "Stopped" (err=<nil>)
	I0327 22:32:14.968392 1538833 status.go:343] host is not running, skipping remaining checks
	I0327 22:32:14.968400 1538833 status.go:257] multinode-164871 status: &{Name:multinode-164871 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 22:32:14.968433 1538833 status.go:255] checking status of multinode-164871-m02 ...
	I0327 22:32:14.968850 1538833 cli_runner.go:164] Run: docker container inspect multinode-164871-m02 --format={{.State.Status}}
	I0327 22:32:14.985496 1538833 status.go:330] multinode-164871-m02 host status = "Stopped" (err=<nil>)
	I0327 22:32:14.985516 1538833 status.go:343] host is not running, skipping remaining checks
	I0327 22:32:14.985524 1538833 status.go:257] multinode-164871-m02 status: &{Name:multinode-164871-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

TestMultiNode/serial/RestartMultiNode (46.57s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-164871 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0327 22:32:49.403397 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-164871 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (45.888401258s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-164871 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.57s)

TestMultiNode/serial/ValidateNameConflict (35.88s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-164871
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-164871-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-164871-m02 --driver=docker  --container-runtime=containerd: exit status 14 (98.968681ms)

-- stdout --
	* [multinode-164871-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-164871-m02' is duplicated with machine name 'multinode-164871-m02' in profile 'multinode-164871'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-164871-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-164871-m03 --driver=docker  --container-runtime=containerd: (33.455746922s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-164871
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-164871: exit status 80 (326.586976ms)

-- stdout --
	* Adding node m03 to cluster multinode-164871 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-164871-m03 already exists in multinode-164871-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-164871-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-164871-m03: (1.939264224s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.88s)
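Both negative cases above are asserted purely through process exit codes: 14 (MK_USAGE) for the duplicated profile name and 80 (GUEST_NODE_ADD) for the conflicting node add. Pulling the code out of a failed command in Go looks roughly like this (a sketch reproducing only the first case):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "multinode-164871-m02", "--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The test expects 14 (MK_USAGE) here because the profile
		// name collides with an existing machine name.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}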

TestPreload (108.07s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-480409 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0327 22:34:48.785266 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-480409 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m10.105023989s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-480409 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-480409 image pull gcr.io/k8s-minikube/busybox: (1.292795823s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-480409
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-480409: (12.048415228s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-480409 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-480409 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.77997398s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-480409 image list
helpers_test.go:175: Cleaning up "test-preload-480409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-480409
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-480409: (2.446977402s)
--- PASS: TestPreload (108.07s)

TestScheduledStopUnix (108.15s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-841199 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-841199 --memory=2048 --driver=docker  --container-runtime=containerd: (31.576184812s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-841199 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-841199 -n scheduled-stop-841199
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-841199 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-841199 --cancel-scheduled
E0327 22:36:26.359730 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-841199 -n scheduled-stop-841199
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-841199
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-841199 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-841199
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-841199: exit status 7 (75.833229ms)

-- stdout --
	scheduled-stop-841199
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-841199 -n scheduled-stop-841199
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-841199 -n scheduled-stop-841199: exit status 7 (74.904334ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-841199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-841199
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-841199: (4.894817983s)
--- PASS: TestScheduledStopUnix (108.15s)
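The flow above schedules a stop (--schedule 5m, then 15s), cancels or reschedules it, and finally polls status until the host reports Stopped; exit status 7 from `status` is normal for a stopped host, which is why the test logs it as "may be ok". A rough polling loop under those assumptions (a sketch, not the test's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// status --format={{.Host}} prints only the host state, as in
		// the log; a non-zero exit (status 7) is expected once stopped.
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "scheduled-stop-841199").Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for scheduled stop")
}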

TestInsufficientStorage (10.71s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-738297 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-738297 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.258038406s)

-- stdout --
	{"specversion":"1.0","id":"a08bff2e-a6d6-466f-a1c3-479c77306be7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-738297] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3292378-83ac-427f-baf8-f0a6cb72ab00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17735"}}
	{"specversion":"1.0","id":"d1ef676a-f3f6-4a58-ab96-3fef4456575f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fa3c577f-a10f-49c5-9767-140fad54ae80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig"}}
	{"specversion":"1.0","id":"f4906af7-e18f-4063-830c-4f8caf20b6ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube"}}
	{"specversion":"1.0","id":"61f8219c-394c-4217-99e0-5993ec398025","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9655b056-1eb0-45ce-a06d-9a792d0cde7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f36846ce-e33c-4815-8de7-2135267c3e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"544149da-419b-4276-bdec-3ac6513f4248","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"427b0f27-c71d-41ef-98da-4b68aff93e9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"879efc1c-fecc-465e-a3a5-d251cfbff676","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6b385a1a-e5ac-4173-bc42-959a359d96a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-738297\" primary control-plane node in \"insufficient-storage-738297\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5834f9e3-8c3d-45d8-a2a6-3c380263accb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-beta.0 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"42a1ead9-f684-4ea7-91e4-fa1c62434638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd8beabd-157a-4667-94d8-9709a395acab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-738297 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-738297 --output=json --layout=cluster: exit status 7 (276.368813ms)

-- stdout --
	{"Name":"insufficient-storage-738297","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-738297","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0327 22:37:26.150651 1556413 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-738297" does not appear in /home/jenkins/minikube-integration/17735-1410709/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-738297 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-738297 --output=json --layout=cluster: exit status 7 (276.361478ms)

-- stdout --
	{"Name":"insufficient-storage-738297","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-738297","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0327 22:37:26.427753 1556467 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-738297" does not appear in /home/jenkins/minikube-integration/17735-1410709/kubeconfig
	E0327 22:37:26.437874 1556467 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/insufficient-storage-738297/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-738297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-738297
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-738297: (1.896099167s)
--- PASS: TestInsufficientStorage (10.71s)
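
The --output=json stream above is line-delimited CloudEvents: one JSON object per line, with progress steps typed "io.k8s.sigs.minikube.step" and the failure typed "io.k8s.sigs.minikube.error". A minimal Go sketch for consuming such a stream and catching the RSRC_DOCKER_STORAGE failure follows; the struct and field names are read off the events shown here and are not minikube's own types.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent holds the fields of interest from one CloudEvents line.
type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"` // all data values in the events above are strings
}

func main() {
	// Hypothetical usage: minikube start --output=json ... | this-program
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev minikubeEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["name"] == "RSRC_DOCKER_STORAGE" {
			fmt.Println("out of disk space; advice:", ev.Data["advice"])
			os.Exit(26) // matches the exitcode field in the error event above
		}
	}
}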

TestRunningBinaryUpgrade (89.82s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3578553533 start -p running-upgrade-797028 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0327 22:42:51.831740 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3578553533 start -p running-upgrade-797028 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.878150091s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-797028 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-797028 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.830810877s)
helpers_test.go:175: Cleaning up "running-upgrade-797028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-797028
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-797028: (2.952512956s)
--- PASS: TestRunningBinaryUpgrade (89.82s)
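
The flow version_upgrade_test.go:120-130 drives here is: create the profile with an older released binary, then run "start" again with the freshly built binary against the same, still-running profile. A condensed sketch of that sequence using os/exec (binary paths and profile name copied from this log; the real test adds retries and assertions):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	const profile = "running-upgrade-797028"
	// 1) start with the old released binary
	run("/tmp/minikube-v1.26.0.3578553533", "start", "-p", profile,
		"--memory=2200", "--vm-driver=docker", "--container-runtime=containerd")
	// 2) upgrade in place with the new binary while the cluster is running
	run("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=2200", "--driver=docker", "--container-runtime=containerd")
}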

TestKubernetesUpgrade (369.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-730442 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-730442 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.873200645s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-730442
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-730442: (1.341506491s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-730442 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-730442 status --format={{.Host}}: exit status 7 (105.299941ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-730442 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0327 22:39:48.785302 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-730442 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m56.255771112s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-730442 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-730442 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-730442 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (104.43087ms)

-- stdout --
	* [kubernetes-upgrade-730442] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-730442
	    minikube start -p kubernetes-upgrade-730442 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7304422 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-730442 --kubernetes-version=v1.30.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-730442 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-730442 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (13.243278454s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-730442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-730442
E0327 22:44:48.785445 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-730442: (2.261195875s)
--- PASS: TestKubernetesUpgrade (369.34s)
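
The guarded downgrade above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) comes down to a semantic-version comparison between the requested version and the one the cluster already runs. A sketch of that check using golang.org/x/mod/semver; minikube's internal implementation may differ:

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

func main() {
	current, requested := "v1.30.0-beta.0", "v1.20.0" // versions from the test above
	if semver.Compare(requested, current) < 0 {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			current, requested)
		os.Exit(106) // exit code observed above for K8S_DOWNGRADE_UNSUPPORTED
	}
	fmt.Println("same version or upgrade: proceeding")
}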

TestMissingContainerUpgrade (175.39s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1510726154 start -p missing-upgrade-517826 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1510726154 start -p missing-upgrade-517826 --memory=2200 --driver=docker  --container-runtime=containerd: (1m30.209261307s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-517826
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-517826: (10.327920486s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-517826
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-517826 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-517826 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.629921348s)
helpers_test.go:175: Cleaning up "missing-upgrade-517826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-517826
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-517826: (1.992882958s)
--- PASS: TestMissingContainerUpgrade (175.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-552263 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-552263 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (86.181553ms)

-- stdout --
	* [NoKubernetes-552263] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
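
Exit status 14 (MK_USAGE) here is pure pre-flight flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive. An illustrative reimplementation with the standard flag package (minikube itself uses cobra/viper, so this is a sketch, not its code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()
	if *noK8s && *version != "" {
		// same conflict the test asserts on above
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE
	}
}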

TestNoKubernetes/serial/StartWithK8s (37.11s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-552263 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-552263 --driver=docker  --container-runtime=containerd: (36.620490358s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-552263 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.11s)

TestNoKubernetes/serial/StartWithStopK8s (16.18s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-552263 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-552263 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.026133155s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-552263 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-552263 status -o json: exit status 2 (279.942442ms)

-- stdout --
	{"Name":"NoKubernetes-552263","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-552263
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-552263: (1.873861296s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.18s)

TestNoKubernetes/serial/Start (8.04s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-552263 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-552263 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.042260902s)
--- PASS: TestNoKubernetes/serial/Start (8.04s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-552263 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-552263 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.687467ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

TestNoKubernetes/serial/ProfileList (1.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-552263
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-552263: (1.220415073s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (7.11s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-552263 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-552263 --driver=docker  --container-runtime=containerd: (7.1094617s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-552263 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-552263 "sudo systemctl is-active --quiet service kubelet": exit status 1 (358.751167ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (1.11s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

TestStoppedBinaryUpgrade/Upgrade (104.07s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3416589871 start -p stopped-upgrade-098167 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3416589871 start -p stopped-upgrade-098167 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.501275483s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3416589871 -p stopped-upgrade-098167 stop
E0327 22:41:26.359046 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3416589871 -p stopped-upgrade-098167 stop: (19.956340492s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-098167 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-098167 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.607169151s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.07s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-098167
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-098167: (1.132648441s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

TestPause/serial/Start (84.35s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-392712 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-392712 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m24.349936176s)
--- PASS: TestPause/serial/Start (84.35s)

TestPause/serial/SecondStartNoReconfiguration (6.83s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-392712 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-392712 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.803553277s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.83s)

TestPause/serial/Pause (0.93s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-392712 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-392712 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-392712 --output=json --layout=cluster: exit status 2 (379.63217ms)

-- stdout --
	{"Name":"pause-392712","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-392712","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
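
The --layout=cluster JSON asserted on here decodes cleanly into a small struct. The code/name pairs observed across this report are: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A hedged Go sketch, trimmed to the top-level fields (the struct is illustrative, not minikube's exported type; binary path and profile are taken from this run):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

func main() {
	// `status` exits non-zero for non-Running states (exit status 2 above),
	// so parse stdout rather than trusting the exit code alone.
	out, _ := exec.Command("out/minikube-linux-arm64", "status", "-p", "pause-392712",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName) // e.g. 418 (Paused)
}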

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-392712 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (1.31s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-392712 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-392712 --alsologtostderr -v=5: (1.311840944s)
--- PASS: TestPause/serial/PauseAgain (1.31s)

TestPause/serial/DeletePaused (3.11s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-392712 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-392712 --alsologtostderr -v=5: (3.105068715s)
--- PASS: TestPause/serial/DeletePaused (3.11s)

TestPause/serial/VerifyDeletedResources (3.79s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (3.746690409s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-392712
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-392712: exit status 1 (13.800183ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-392712: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.79s)
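
The cleanup verification above treats a failing "docker volume inspect" as success: after "minikube delete", the profile's volume must be gone, so exit status 1 plus "no such volume" is the passing outcome. A small helper capturing that inversion (illustrative, not the test's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeDeleted reports whether a Docker volume no longer exists.
func volumeDeleted(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	// a non-zero exit with "no such volume" is exactly what we want here
	return err != nil && strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println("pause-392712 deleted:", volumeDeleted("pause-392712"))
}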

TestNetworkPlugins/group/false (5.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-373803 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-373803 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (178.05304ms)

-- stdout --
	* [false-373803] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0327 22:45:29.351496 1596868 out.go:291] Setting OutFile to fd 1 ...
	I0327 22:45:29.351626 1596868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:45:29.351652 1596868 out.go:304] Setting ErrFile to fd 2...
	I0327 22:45:29.351658 1596868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 22:45:29.351897 1596868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-1410709/.minikube/bin
	I0327 22:45:29.352280 1596868 out.go:298] Setting JSON to false
	I0327 22:45:29.353163 1596868 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23267,"bootTime":1711556262,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0327 22:45:29.353231 1596868 start.go:139] virtualization:  
	I0327 22:45:29.355988 1596868 out.go:177] * [false-373803] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 22:45:29.358666 1596868 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 22:45:29.358749 1596868 notify.go:220] Checking for updates...
	I0327 22:45:29.362728 1596868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 22:45:29.364817 1596868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-1410709/kubeconfig
	I0327 22:45:29.366760 1596868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-1410709/.minikube
	I0327 22:45:29.368797 1596868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 22:45:29.370764 1596868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 22:45:29.372932 1596868 config.go:182] Loaded profile config "force-systemd-flag-662526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 22:45:29.373034 1596868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 22:45:29.392530 1596868 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 22:45:29.392655 1596868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 22:45:29.460289 1596868 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-03-27 22:45:29.449129556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 22:45:29.460403 1596868 docker.go:295] overlay module found
	I0327 22:45:29.462785 1596868 out.go:177] * Using the docker driver based on user configuration
	I0327 22:45:29.464834 1596868 start.go:297] selected driver: docker
	I0327 22:45:29.464855 1596868 start.go:901] validating driver "docker" against <nil>
	I0327 22:45:29.464871 1596868 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 22:45:29.467300 1596868 out.go:177] 
	W0327 22:45:29.469119 1596868 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0327 22:45:29.471214 1596868 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-373803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-373803

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-373803

>>> host: /etc/nsswitch.conf:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /etc/hosts:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /etc/resolv.conf:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-373803

>>> host: crictl pods:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: crictl containers:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> k8s: describe netcat deployment:
error: context "false-373803" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-373803" does not exist

>>> k8s: netcat logs:
error: context "false-373803" does not exist

>>> k8s: describe coredns deployment:
error: context "false-373803" does not exist

>>> k8s: describe coredns pods:
error: context "false-373803" does not exist

>>> k8s: coredns logs:
error: context "false-373803" does not exist

>>> k8s: describe api server pod(s):
error: context "false-373803" does not exist

>>> k8s: api server logs:
error: context "false-373803" does not exist

>>> host: /etc/cni:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: ip a s:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: ip r s:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: iptables-save:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: iptables table nat:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> k8s: describe kube-proxy daemon set:
error: context "false-373803" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-373803" does not exist

>>> k8s: kube-proxy logs:
error: context "false-373803" does not exist

>>> host: kubelet daemon status:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: kubelet daemon config:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> k8s: kubelet logs:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-373803

>>> host: docker daemon status:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: docker daemon config:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /etc/docker/daemon.json:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: docker system info:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: cri-docker daemon status:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: cri-docker daemon config:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: cri-dockerd version:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: containerd daemon status:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: containerd daemon config:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /etc/containerd/config.toml:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: containerd config dump:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: crio daemon status:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: crio daemon config:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: /etc/crio:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

>>> host: crio config:
* Profile "false-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373803"

----------------------- debugLogs end: false-373803 [took: 5.103223341s] --------------------------------
helpers_test.go:175: Cleaning up "false-373803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-373803
--- PASS: TestNetworkPlugins/group/false (5.48s)
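
The immediate MK_USAGE rejection above (exit status 14, "The \"containerd\" container runtime requires CNI") is another pre-flight validation: only the Docker runtime may run without a CNI, so --cni=false is refused for containerd before any node is created. A sketch of that rule (illustrative; minikube's real check lives in its start validation):

package main

import (
	"fmt"
	"os"
)

// validateCNI mirrors the rule exercised above: disabling CNI is only
// permitted with the docker container runtime.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}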

TestStartStop/group/old-k8s-version/serial/FirstStart (151.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-195171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-195171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m31.74547727s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (151.75s)
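
This start pins --kubernetes-version=v1.20.0 to exercise an old Kubernetes release against the current binary. A minimal sketch of the same invocation outside the harness, assuming a minikube binary on PATH in place of out/minikube-linux-arm64:

	minikube start -p old-k8s-version-195171 --memory=2200 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.20.0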

TestStartStop/group/old-k8s-version/serial/DeployApp (8.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-195171 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bd49575f-e700-4739-bd9c-fde10b922560] Pending
helpers_test.go:344: "busybox" [bd49575f-e700-4739-bd9c-fde10b922560] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bd49575f-e700-4739-bd9c-fde10b922560] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.014123788s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-195171 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.95s)
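
The harness polls the pod list for the integration-test=busybox label itself; roughly the same readiness gate can be reproduced with kubectl wait (a sketch, assuming the profile's context is present in kubeconfig):

	kubectl --context old-k8s-version-195171 wait --for=condition=ready \
	  pod -l integration-test=busybox --timeout=8m0s
	kubectl --context old-k8s-version-195171 exec busybox -- /bin/sh -c "ulimit -n"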

TestStartStop/group/no-preload/serial/FirstStart (73.63s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-463483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0327 22:49:29.410559 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-463483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (1m13.625684761s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.63s)
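
--preload=false disables minikube's preloaded image tarball, so the runtime pulls each image at start time. The equivalent invocation outside the harness (assuming minikube on PATH) would be roughly:

	minikube start -p no-preload-463483 --memory=2200 --preload=false --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0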

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-195171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-195171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.747377766s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-195171 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.05s)
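
--images/--registries rewrite an addon's default image reference; here metrics-server is pointed at a fake registry (fake.domain) purely so the test can assert the override landed. One way to read back the rewritten image (a sketch; the deployment name matches the describe call above):

	kubectl --context old-k8s-version-195171 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'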

TestStartStop/group/old-k8s-version/serial/Stop (13.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-195171 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-195171 --alsologtostderr -v=3: (13.748728583s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.75s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-195171 -n old-k8s-version-195171
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-195171 -n old-k8s-version-195171: exit status 7 (188.606221ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-195171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.49s)
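
minikube status reports a stopped profile through its exit code (7 here) rather than stderr, which is why the harness treats the non-zero exit as "may be ok". A sketch of the same probe:

	minikube status --format='{{.Host}}' -p old-k8s-version-195171; echo "exit=$?"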

TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-463483 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6316241e-512a-4f5f-99c9-8852b1bf6c67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6316241e-512a-4f5f-99c9-8852b1bf6c67] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003402054s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-463483 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-463483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-463483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075243026s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-463483 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (12.21s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-463483 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-463483 --alsologtostderr -v=3: (12.207251248s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.21s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-463483 -n no-preload-463483
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-463483 -n no-preload-463483: exit status 7 (84.401185ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-463483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (267.16s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-463483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0327 22:51:26.358880 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 22:54:48.785502 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-463483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (4m26.798806132s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-463483 -n no-preload-463483
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.16s)
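
Running minikube start again on an existing, stopped profile restarts it in place with the same flags rather than provisioning a new cluster. A quick post-restart health check (sketch):

	minikube status -p no-preload-463483
	kubectl --context no-preload-463483 get nodes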

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-b628s" [902a1bcd-97fc-4b46-b081-b847a2ecce9e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0036442s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
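
The dashboard addon was enabled while the cluster was stopped, so this step verifies its pod actually comes up after the restart; roughly the same check by hand:

	kubectl --context no-preload-463483 -n kubernetes-dashboard wait \
	  --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s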

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-b628s" [902a1bcd-97fc-4b46-b081-b847a2ecce9e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00348754s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-463483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-463483 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
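
image list --format=json dumps every image known to the container runtime, and the test flags anything outside the expected Kubernetes set. A sketch for inspecting the same list (assumes jq is installed, and assumes the JSON entries expose a repoTags field):

	minikube -p no-preload-463483 image list --format=json | jq -r '.[].repoTags[]'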

TestStartStop/group/no-preload/serial/Pause (3.84s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-463483 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-463483 --alsologtostderr -v=1: (1.018645262s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-463483 -n no-preload-463483
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-463483 -n no-preload-463483: exit status 2 (476.119117ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-463483 -n no-preload-463483
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-463483 -n no-preload-463483: exit status 2 (407.696858ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-463483 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-463483 --alsologtostderr -v=1: (1.046835609s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-463483 -n no-preload-463483
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-463483 -n no-preload-463483
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.84s)
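
pause freezes the Kubernetes components ({{.APIServer}} reports Paused, {{.Kubelet}} Stopped) and status exits 2 until unpause, which is the exit-2-but-ok sequence above. The full cycle as a sketch:

	minikube pause -p no-preload-463483
	minikube status -p no-preload-463483 --format='{{.APIServer}}'   # Paused, exit 2
	minikube unpause -p no-preload-463483
	minikube status -p no-preload-463483 --format='{{.APIServer}}'   # Running, exit 0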

TestStartStop/group/embed-certs/serial/FirstStart (86.9s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-627479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-627479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m26.895877365s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.90s)
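
--embed-certs inlines the client certificate and key into kubeconfig (as client-certificate-data/client-key-data) instead of referencing files under ~/.minikube. A quick way to confirm (sketch):

	kubectl config view --raw | grep -A3 'name: embed-certs-627479'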

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-cflzk" [4b9c4156-f6c3-40dc-8fa2-fd06eaeac8a7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.098249646s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-cflzk" [4b9c4156-f6c3-40dc-8fa2-fd06eaeac8a7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.021339713s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-195171 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-195171 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/old-k8s-version/serial/Pause (3.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-195171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-195171 -n old-k8s-version-195171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-195171 -n old-k8s-version-195171: exit status 2 (380.833905ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-195171 -n old-k8s-version-195171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-195171 -n old-k8s-version-195171: exit status 2 (409.419483ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-195171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-195171 -n old-k8s-version-195171
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-195171 -n old-k8s-version-195171
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.79s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-419041 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0327 22:56:26.359434 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-419041 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m28.664524473s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.66s)
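
--apiserver-port=8444 moves the API server off minikube's default 8443, so the profile's kubeconfig entry should point at port 8444. A quick check (sketch):

	kubectl --context default-k8s-diff-port-419041 cluster-info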

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-627479 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [619e4df5-c4e7-48fd-aa8f-b866553c6361] Pending
helpers_test.go:344: "busybox" [619e4df5-c4e7-48fd-aa8f-b866553c6361] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [619e4df5-c4e7-48fd-aa8f-b866553c6361] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005034573s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-627479 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-627479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-627479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032316149s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-627479 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-627479 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-627479 --alsologtostderr -v=3: (12.091926894s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-627479 -n embed-certs-627479
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-627479 -n embed-certs-627479: exit status 7 (89.732946ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-627479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (267.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-627479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-627479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m27.350894441s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-627479 -n embed-certs-627479
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.76s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-419041 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c6bd8170-fd36-485d-9ffc-249397770063] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c6bd8170-fd36-485d-9ffc-249397770063] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.009783829s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-419041 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-419041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-419041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03790512s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-419041 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-419041 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-419041 --alsologtostderr -v=3: (12.618693922s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.62s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041: exit status 7 (112.591522ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-419041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-419041 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0327 22:59:22.443502 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:22.448803 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:22.459183 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:22.479462 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:22.519857 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:22.599973 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:22.760574 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:23.081133 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:23.721363 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:25.001734 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:27.562002 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:31.832687 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 22:59:32.682846 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:42.923983 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 22:59:48.784521 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 23:00:03.404158 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 23:00:42.752135 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:42.757423 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:42.767687 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:42.787949 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:42.828187 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:42.908514 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:43.068973 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:43.389512 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:44.030592 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:44.365041 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 23:00:45.311761 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:47.872375 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:00:52.993602 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:01:03.233781 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:01:23.714527 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:01:26.359505 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 23:02:04.675371 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:02:06.285267 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-419041 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m50.945023225s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.31s)
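
The interleaved E0327 cert_rotation lines above are emitted by client-go's client-certificate reload watcher in the long-running test process when a referenced client.crt is missing on disk (typically a profile that has since been cleaned up or had its certs regenerated); they are log noise and do not affect this test's result.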

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bst8p" [58d14ca3-fe5b-4c5f-a382-50b51a19f35c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00395268s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bst8p" [58d14ca3-fe5b-4c5f-a382-50b51a19f35c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003903848s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-627479 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-627479 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-627479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-627479 -n embed-certs-627479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-627479 -n embed-certs-627479: exit status 2 (337.61464ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-627479 -n embed-certs-627479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-627479 -n embed-certs-627479: exit status 2 (329.233664ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-627479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-627479 -n embed-certs-627479
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-627479 -n embed-certs-627479
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

TestStartStop/group/newest-cni/serial/FirstStart (45.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-340432 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-340432 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (45.711092244s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.71s)
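
--network-plugin=cni leaves CNI installation to the caller (which is why the later DeployApp/UserAppExistsAfterStop steps are skipped for this group), and --extra-config=kubeadm.pod-network-cidr passes the CIDR straight through to kubeadm. An equivalent start, as a sketch with minikube on PATH:

	minikube start -p newest-cni-340432 --memory=2200 --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --feature-gates ServerSideApply=true \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0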

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cf8wd" [49c4aa84-7e8a-4146-b9aa-27400fb02e6a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004188091s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cf8wd" [49c4aa84-7e8a-4146-b9aa-27400fb02e6a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004281548s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-419041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-340432 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-340432 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.269721234s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-340432 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-340432 --alsologtostderr -v=3: (1.297308149s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-419041 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-419041 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-419041 --alsologtostderr -v=1: (1.040625147s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041: exit status 2 (475.633827ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041: exit status 2 (463.404093ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-419041 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-419041 -n default-k8s-diff-port-419041
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.57s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-340432 -n newest-cni-340432
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-340432 -n newest-cni-340432: exit status 7 (93.525791ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-340432 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (18.83s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-340432 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-340432 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (18.372053472s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-340432 -n newest-cni-340432
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.83s)

TestNetworkPlugins/group/auto/Start (94.62s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0327 23:03:26.595782 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m34.620816186s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.62s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-340432 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (4.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-340432 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-340432 -n newest-cni-340432
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-340432 -n newest-cni-340432: exit status 2 (397.310563ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-340432 -n newest-cni-340432
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-340432 -n newest-cni-340432: exit status 2 (403.065952ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-340432 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-340432 --alsologtostderr -v=1: (1.014262254s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-340432 -n newest-cni-340432
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-340432 -n newest-cni-340432
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.06s)
E0327 23:09:56.846991 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:56.852199 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:56.862438 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:56.882683 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:56.922905 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:57.003772 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:57.163983 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:57.484262 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:58.124847 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:09:59.405089 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:10:01.966139 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:10:07.087272 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:10:12.910499 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:12.916066 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:12.926316 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:12.946614 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:12.986883 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:13.067186 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:13.227365 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:13.547922 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:14.190634 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:15.471539 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:17.327584 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:10:18.032686 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:23.153795 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:33.393955 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory
E0327 23:10:34.853593 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:10:37.808416 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/auto-373803/client.crt: no such file or directory
E0327 23:10:42.752110 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:10:53.875182 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/kindnet-373803/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (91.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0327 23:04:22.444049 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
E0327 23:04:48.785049 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/addons-135346/client.crt: no such file or directory
E0327 23:04:50.126261 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m31.498832787s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.50s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-373803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-373803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-94rzh" [69ee0b46-bbc7-4011-88ce-ce1ac4231cd4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-94rzh" [69ee0b46-bbc7-4011-88ce-ce1ac4231cd4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004948985s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-373803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-h2gbp" [0ae8afe1-47d8-4197-b64e-a3d6408995fc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00458046s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-373803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-373803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xjbg5" [4f7d2563-d5f5-4a73-af9d-36786c2933aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xjbg5" [4f7d2563-d5f5-4a73-af9d-36786c2933aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006137957s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.43s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-373803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (84.53s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0327 23:05:42.752618 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m24.530557808s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.53s)

TestNetworkPlugins/group/custom-flannel/Start (63.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0327 23:06:09.410803 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
E0327 23:06:10.436163 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/no-preload-463483/client.crt: no such file or directory
E0327 23:06:26.358607 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/functional-057506/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.269101286s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.27s)

TestNetworkPlugins/group/calico/ControllerPod (6.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hqfxc" [28ec4d62-80d9-49d2-9fcb-04a7febe022c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.043799525s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.05s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-373803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-373803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w6sc9" [47c5d0a0-da9e-4780-b0bb-c8e627fec459] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w6sc9" [47c5d0a0-da9e-4780-b0bb-c8e627fec459] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003724017s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-373803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

TestNetworkPlugins/group/calico/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-373803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t228d" [6ae9de41-0c2b-4869-b317-b639b912a52a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t228d" [6ae9de41-0c2b-4869-b317-b639b912a52a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004870208s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-373803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-373803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (100.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m40.897402026s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.90s)

TestNetworkPlugins/group/flannel/Start (72.44s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0327 23:07:51.002123 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:51.007397 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:51.018166 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:51.038592 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:51.082364 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:51.166543 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:51.327090 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:51.647553 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:52.288693 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:53.569278 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:07:56.129712 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:08:01.250709 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:08:11.490927 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
E0327 23:08:31.972066 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/default-k8s-diff-port-419041/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m12.444752074s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.44s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-89d7t" [bc841bbf-ef7b-435b-863b-9b7a283e48d9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004466236s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-373803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-373803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lhlrd" [2148d7e5-0325-44c3-9b5b-166b0900a04b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lhlrd" [2148d7e5-0325-44c3-9b5b-166b0900a04b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004531859s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.27s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-373803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-373803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-373803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9jf6p" [961427f1-6dbe-4f1c-b8e3-c0f8fabb9d04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9jf6p" [961427f1-6dbe-4f1c-b8e3-c0f8fabb9d04] Running
E0327 23:09:22.444066 1416127 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/old-k8s-version-195171/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007506218s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

TestNetworkPlugins/group/bridge/Start (89.05s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-373803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m29.053375138s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.05s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-373803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.30s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-373803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-373803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zvx2p" [f4e3cb0d-f865-4cf1-873d-3cda2de03130] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zvx2p" [f4e3cb0d-f865-4cf1-873d-3cda2de03130] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004585457s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-373803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-373803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-140125 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-140125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-140125
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-803212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-803212
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/kubenet (5.7s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-373803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-373803

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-373803

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /etc/hosts:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /etc/resolv.conf:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-373803

>>> host: crictl pods:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: crictl containers:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> k8s: describe netcat deployment:
error: context "kubenet-373803" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-373803" does not exist

>>> k8s: netcat logs:
error: context "kubenet-373803" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-373803" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-373803" does not exist

>>> k8s: coredns logs:
error: context "kubenet-373803" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-373803" does not exist

>>> k8s: api server logs:
error: context "kubenet-373803" does not exist

>>> host: /etc/cni:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: ip a s:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: ip r s:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: iptables-save:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: iptables table nat:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-373803" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-373803" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-373803" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: kubelet daemon config:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> k8s: kubelet logs:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17735-1410709/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 27 Mar 2024 22:45:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-662526
contexts:
- context:
    cluster: force-systemd-flag-662526
    extensions:
    - extension:
        last-update: Wed, 27 Mar 2024 22:45:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: force-systemd-flag-662526
  name: force-systemd-flag-662526
current-context: force-systemd-flag-662526
kind: Config
preferences: {}
users:
- name: force-systemd-flag-662526
  user:
    client-certificate: /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/force-systemd-flag-662526/client.crt
    client-key: /home/jenkins/minikube-integration/17735-1410709/.minikube/profiles/force-systemd-flag-662526/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-373803

>>> host: docker daemon status:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: docker daemon config:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: docker system info:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: cri-docker daemon status:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: cri-docker daemon config:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: cri-dockerd version:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: containerd daemon status:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: containerd daemon config:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: containerd config dump:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: crio daemon status:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: crio daemon config:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: /etc/crio:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

>>> host: crio config:
* Profile "kubenet-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373803"

----------------------- debugLogs end: kubenet-373803 [took: 5.424372319s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-373803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-373803
--- SKIP: TestNetworkPlugins/group/kubenet (5.70s)
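
kubenet is a kubelet-level network plugin rather than a CNI, and the containerd runtime used in this run requires a CNI, so the group is skipped. The two start modes in play, roughly (PROFILE is a placeholder):

    minikube start -p PROFILE --network-plugin=kubenet
    minikube start -p PROFILE --container-runtime=containerd --cni=bridge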

TestNetworkPlugins/group/cilium (6.33s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-373803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-373803

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-373803

>>> host: /etc/nsswitch.conf:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /etc/hosts:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /etc/resolv.conf:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-373803

>>> host: crictl pods:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: crictl containers:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> k8s: describe netcat deployment:
error: context "cilium-373803" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-373803" does not exist

>>> k8s: netcat logs:
error: context "cilium-373803" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-373803" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-373803" does not exist

>>> k8s: coredns logs:
error: context "cilium-373803" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-373803" does not exist

>>> k8s: api server logs:
error: context "cilium-373803" does not exist

>>> host: /etc/cni:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: ip a s:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: ip r s:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: iptables-save:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: iptables table nat:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-373803

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-373803

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-373803" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-373803" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-373803

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-373803

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-373803" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-373803" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-373803" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-373803" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-373803" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: kubelet daemon config:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> k8s: kubelet logs:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-373803

>>> host: docker daemon status:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: docker daemon config:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: docker system info:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: cri-docker daemon status:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: cri-docker daemon config:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: cri-dockerd version:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: containerd daemon status:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: containerd daemon config:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: containerd config dump:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: crio daemon status:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: crio daemon config:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: /etc/crio:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

>>> host: crio config:
* Profile "cilium-373803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373803"

----------------------- debugLogs end: cilium-373803 [took: 6.068680282s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-373803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-373803
--- SKIP: TestNetworkPlugins/group/cilium (6.33s)