Test Report: Docker_Linux_containerd_arm64 18158

89f58c8e24abd36bf3098da28321dad15a54de9c:2024-03-28:33771

Failed tests (8/335)

TestAddons/parallel/Ingress (37.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-482679 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-482679 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-482679 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8de7098c-741a-408b-803c-cbc05d796925] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8de7098c-741a-408b-803c-cbc05d796925] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003244567s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-482679 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.078290287s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
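
For context, the failing step above is equivalent to a plain DNS query against the ingress-dns addon listening on the cluster IP. A minimal standalone Go sketch of that query follows (not part of the test suite; the IP 192.168.49.2 and the name hello-john.test come from the log above, and the 15-second budget mirrors the time nslookup waited before giving up):

// dnsprobe.go: hedged sketch of the DNS lookup the failing step performs.
package main

import (
	"context"
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Query the ingress-dns addon directly rather than the system resolver.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// A timeout here corresponds to the ";; connection timed out" above.
		fmt.Fprintln(os.Stderr, "lookup failed:", err)
		os.Exit(1)
	}
	fmt.Println("resolved to:", addrs)
}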
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-482679 addons disable ingress-dns --alsologtostderr -v=1: (1.632644934s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-482679 addons disable ingress --alsologtostderr -v=1: (8.098223371s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-482679
helpers_test.go:235: (dbg) docker inspect addons-482679:
-- stdout --
	[
	    {
	        "Id": "73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c",
	        "Created": "2024-03-27T23:56:48.813518542Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1958426,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-27T23:56:49.067602775Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f9b5358e8c18dbe49e632154cad75e0968b2e103f621caff2c3ed996f4155861",
	        "ResolvConfPath": "/var/lib/docker/containers/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c/hostname",
	        "HostsPath": "/var/lib/docker/containers/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c/hosts",
	        "LogPath": "/var/lib/docker/containers/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c-json.log",
	        "Name": "/addons-482679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-482679:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-482679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9aa8ed8b93287c234ae3b78023e0f7c5e3b43641dec84ad9d090c99c8fb41740-init/diff:/var/lib/docker/overlay2/07f877cb7d661b8e8bf24e390c9cea61396c20d4f4c8c6395f4b5d699fc104ea/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9aa8ed8b93287c234ae3b78023e0f7c5e3b43641dec84ad9d090c99c8fb41740/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9aa8ed8b93287c234ae3b78023e0f7c5e3b43641dec84ad9d090c99c8fb41740/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9aa8ed8b93287c234ae3b78023e0f7c5e3b43641dec84ad9d090c99c8fb41740/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-482679",
	                "Source": "/var/lib/docker/volumes/addons-482679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-482679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-482679",
	                "name.minikube.sigs.k8s.io": "addons-482679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74d9a91cc060cfa9326d2e400f0bb3c3b25e70aac9db767a505adeb7ab3918a5",
	            "SandboxKey": "/var/run/docker/netns/74d9a91cc060",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35039"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35038"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35035"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35037"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35036"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-482679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "79ac8fcdbbc0a391be86ab0bec214c18d164730c8da1a82ae7b163a5da0a41d5",
	                    "EndpointID": "681fdcfee3fd4ed4a68b2d0d61718f8a427a943f534e277f98419cdb612de316",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-482679",
	                        "73a498ce008b"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
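
The inspect dump above is captured wholesale for post-mortem, but when debugging by hand it is usually enough to pull a single field. As an illustration only (not part of the test code), here is a short Go sketch that extracts just the container IP using the same Go-template mechanism minikube's cli_runner applies later in this log; the profile name addons-482679 is taken from this report:

// inspectip.go: hedged sketch; assumes docker is on PATH and the
// addons-482679 container from this report still exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
		"addons-482679").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container above this prints 192.168.49.2.
	fmt.Println(strings.TrimSpace(string(out)))
}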
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-482679 -n addons-482679
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-482679 logs -n 25: (1.458137444s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-223414              | download-only-223414   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| start   | -o=json --download-only              | download-only-984922   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | -p download-only-984922              |                        |         |                |                     |                     |
	|         | --force --alsologtostderr            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0  |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-984922              | download-only-984922   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-136920              | download-only-136920   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-223414              | download-only-223414   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-984922              | download-only-984922   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| start   | --download-only -p                   | download-docker-147301 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | download-docker-147301               |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p download-docker-147301            | download-docker-147301 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| start   | --download-only -p                   | binary-mirror-677516   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | binary-mirror-677516                 |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --binary-mirror                      |                        |         |                |                     |                     |
	|         | http://127.0.0.1:43099               |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-677516              | binary-mirror-677516   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| addons  | disable dashboard -p                 | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | addons-482679                        |                        |         |                |                     |                     |
	| addons  | enable dashboard -p                  | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | addons-482679                        |                        |         |                |                     |                     |
	| start   | -p addons-482679 --wait=true         | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:58 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |                |                     |                     |
	|         | --addons=registry                    |                        |         |                |                     |                     |
	|         | --addons=metrics-server              |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --addons=ingress                     |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |                |                     |                     |
	| ip      | addons-482679 ip                     | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	| addons  | addons-482679 addons disable         | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | registry --alsologtostderr           |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-482679 addons                 | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | disable metrics-server               |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | addons-482679                        |                        |         |                |                     |                     |
	| ssh     | addons-482679 ssh curl -s            | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |                |                     |                     |
	|         | nginx.example.com'                   |                        |         |                |                     |                     |
	| ip      | addons-482679 ip                     | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	| addons  | addons-482679 addons disable         | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-482679 addons                 | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | disable csi-hostpath-driver          |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | addons-482679 addons disable         | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |                |                     |                     |
	| addons  | addons-482679 addons                 | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | disable volumesnapshots              |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:56:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:56:25.103821 1957985 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:56:25.104003 1957985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:56:25.104032 1957985 out.go:304] Setting ErrFile to fd 2...
	I0327 23:56:25.104039 1957985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:56:25.104334 1957985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0327 23:56:25.104862 1957985 out.go:298] Setting JSON to false
	I0327 23:56:25.105836 1957985 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27523,"bootTime":1711556262,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 23:56:25.105942 1957985 start.go:139] virtualization:  
	I0327 23:56:25.138838 1957985 out.go:177] * [addons-482679] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 23:56:25.165848 1957985 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 23:56:25.165949 1957985 notify.go:220] Checking for updates...
	I0327 23:56:25.234606 1957985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:56:25.267060 1957985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0327 23:56:25.298114 1957985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0327 23:56:25.330529 1957985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 23:56:25.347615 1957985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:56:25.374797 1957985 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:56:25.392412 1957985 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 23:56:25.392534 1957985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:56:25.449949 1957985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 23:56:25.43958026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:56:25.450091 1957985 docker.go:295] overlay module found
	I0327 23:56:25.491215 1957985 out.go:177] * Using the docker driver based on user configuration
	I0327 23:56:25.523602 1957985 start.go:297] selected driver: docker
	I0327 23:56:25.523629 1957985 start.go:901] validating driver "docker" against <nil>
	I0327 23:56:25.523646 1957985 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:56:25.524298 1957985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:56:25.580901 1957985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 23:56:25.571893909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:56:25.581076 1957985 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:56:25.581343 1957985 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:56:25.619352 1957985 out.go:177] * Using Docker driver with root privileges
	I0327 23:56:25.651401 1957985 cni.go:84] Creating CNI manager for ""
	I0327 23:56:25.651440 1957985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:56:25.651451 1957985 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 23:56:25.651544 1957985 start.go:340] cluster config:
	{Name:addons-482679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:56:25.684283 1957985 out.go:177] * Starting "addons-482679" primary control-plane node in "addons-482679" cluster
	I0327 23:56:25.711895 1957985 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 23:56:25.741467 1957985 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 23:56:25.764392 1957985 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 23:56:25.764456 1957985 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0327 23:56:25.764466 1957985 cache.go:56] Caching tarball of preloaded images
	I0327 23:56:25.764586 1957985 preload.go:173] Found /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 23:56:25.764596 1957985 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0327 23:56:25.764921 1957985 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 23:56:25.764936 1957985 profile.go:142] Saving config to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/config.json ...
	I0327 23:56:25.764966 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/config.json: {Name:mk6d358cd17889935dbc60afe70eb10b4aa4e09d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:25.777611 1957985 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 23:56:25.777731 1957985 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 23:56:25.777756 1957985 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 23:56:25.777762 1957985 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 23:56:25.777777 1957985 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 23:56:25.777794 1957985 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 from local cache
	I0327 23:56:41.827268 1957985 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 from cached tarball
	I0327 23:56:41.827309 1957985 cache.go:194] Successfully downloaded all kic artifacts
	I0327 23:56:41.827339 1957985 start.go:360] acquireMachinesLock for addons-482679: {Name:mkfd4bcc4f7e46622616fd324fcf1ee8bd5a31ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:56:41.827991 1957985 start.go:364] duration metric: took 621.813µs to acquireMachinesLock for "addons-482679"
	I0327 23:56:41.828024 1957985 start.go:93] Provisioning new machine with config: &{Name:addons-482679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 23:56:41.828134 1957985 start.go:125] createHost starting for "" (driver="docker")
	I0327 23:56:41.830088 1957985 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0327 23:56:41.830326 1957985 start.go:159] libmachine.API.Create for "addons-482679" (driver="docker")
	I0327 23:56:41.830357 1957985 client.go:168] LocalClient.Create starting
	I0327 23:56:41.830457 1957985 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem
	I0327 23:56:42.188094 1957985 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem
	I0327 23:56:42.414264 1957985 cli_runner.go:164] Run: docker network inspect addons-482679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0327 23:56:42.426989 1957985 cli_runner.go:211] docker network inspect addons-482679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0327 23:56:42.427074 1957985 network_create.go:281] running [docker network inspect addons-482679] to gather additional debugging logs...
	I0327 23:56:42.427094 1957985 cli_runner.go:164] Run: docker network inspect addons-482679
	W0327 23:56:42.440650 1957985 cli_runner.go:211] docker network inspect addons-482679 returned with exit code 1
	I0327 23:56:42.440683 1957985 network_create.go:284] error running [docker network inspect addons-482679]: docker network inspect addons-482679: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-482679 not found
	I0327 23:56:42.440697 1957985 network_create.go:286] output of [docker network inspect addons-482679]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-482679 not found
	
	** /stderr **
	I0327 23:56:42.440814 1957985 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 23:56:42.455145 1957985 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025c0170}
	I0327 23:56:42.455187 1957985 network_create.go:124] attempt to create docker network addons-482679 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0327 23:56:42.455250 1957985 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-482679 addons-482679
	I0327 23:56:42.512310 1957985 network_create.go:108] docker network addons-482679 192.168.49.0/24 created
	I0327 23:56:42.512344 1957985 kic.go:121] calculated static IP "192.168.49.2" for the "addons-482679" container
	I0327 23:56:42.512418 1957985 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0327 23:56:42.524433 1957985 cli_runner.go:164] Run: docker volume create addons-482679 --label name.minikube.sigs.k8s.io=addons-482679 --label created_by.minikube.sigs.k8s.io=true
	I0327 23:56:42.538485 1957985 oci.go:103] Successfully created a docker volume addons-482679
	I0327 23:56:42.538585 1957985 cli_runner.go:164] Run: docker run --rm --name addons-482679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-482679 --entrypoint /usr/bin/test -v addons-482679:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib
	I0327 23:56:44.455615 1957985 cli_runner.go:217] Completed: docker run --rm --name addons-482679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-482679 --entrypoint /usr/bin/test -v addons-482679:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib: (1.916970324s)
	I0327 23:56:44.455645 1957985 oci.go:107] Successfully prepared a docker volume addons-482679
	I0327 23:56:44.455679 1957985 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 23:56:44.455704 1957985 kic.go:194] Starting extracting preloaded images to volume ...
	I0327 23:56:44.455784 1957985 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-482679:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0327 23:56:48.743861 1957985 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-482679:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.288038637s)
	I0327 23:56:48.743893 1957985 kic.go:203] duration metric: took 4.288185959s to extract preloaded images to volume ...
	W0327 23:56:48.744043 1957985 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0327 23:56:48.744180 1957985 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0327 23:56:48.800638 1957985 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-482679 --name addons-482679 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-482679 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-482679 --network addons-482679 --ip 192.168.49.2 --volume addons-482679:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8
	I0327 23:56:49.076363 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Running}}
	I0327 23:56:49.091738 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:56:49.113582 1957985 cli_runner.go:164] Run: docker exec addons-482679 stat /var/lib/dpkg/alternatives/iptables
	I0327 23:56:49.194587 1957985 oci.go:144] the created container "addons-482679" has a running status.
	I0327 23:56:49.194619 1957985 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa...
	I0327 23:56:49.915884 1957985 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0327 23:56:49.939062 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:56:49.959760 1957985 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0327 23:56:49.959779 1957985 kic_runner.go:114] Args: [docker exec --privileged addons-482679 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0327 23:56:50.010480 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:56:50.033077 1957985 machine.go:94] provisionDockerMachine start ...
	I0327 23:56:50.033188 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:50.050825 1957985 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:50.051092 1957985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35039 <nil> <nil>}
	I0327 23:56:50.051101 1957985 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 23:56:50.179391 1957985 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-482679
	
	I0327 23:56:50.179473 1957985 ubuntu.go:169] provisioning hostname "addons-482679"
	I0327 23:56:50.179577 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:50.198186 1957985 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:50.198427 1957985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35039 <nil> <nil>}
	I0327 23:56:50.198439 1957985 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-482679 && echo "addons-482679" | sudo tee /etc/hostname
	I0327 23:56:50.345936 1957985 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-482679
	
	I0327 23:56:50.346065 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:50.361171 1957985 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:50.361414 1957985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35039 <nil> <nil>}
	I0327 23:56:50.361431 1957985 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-482679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-482679/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-482679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:56:50.482094 1957985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:56:50.482122 1957985 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18158-1951721/.minikube CaCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18158-1951721/.minikube}
	I0327 23:56:50.482147 1957985 ubuntu.go:177] setting up certificates
	I0327 23:56:50.482156 1957985 provision.go:84] configureAuth start
	I0327 23:56:50.482215 1957985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-482679
	I0327 23:56:50.497392 1957985 provision.go:143] copyHostCerts
	I0327 23:56:50.497481 1957985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.pem (1078 bytes)
	I0327 23:56:50.497598 1957985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/cert.pem (1123 bytes)
	I0327 23:56:50.497653 1957985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/key.pem (1675 bytes)
	I0327 23:56:50.497695 1957985 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem org=jenkins.addons-482679 san=[127.0.0.1 192.168.49.2 addons-482679 localhost minikube]
	I0327 23:56:50.919058 1957985 provision.go:177] copyRemoteCerts
	I0327 23:56:50.919153 1957985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:56:50.919217 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:50.933780 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.023177 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:56:51.047902 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 23:56:51.072742 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 23:56:51.097514 1957985 provision.go:87] duration metric: took 615.344563ms to configureAuth
	I0327 23:56:51.097541 1957985 ubuntu.go:193] setting minikube options for container-runtime
	I0327 23:56:51.097730 1957985 config.go:182] Loaded profile config "addons-482679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 23:56:51.097745 1957985 machine.go:97] duration metric: took 1.064649964s to provisionDockerMachine
	I0327 23:56:51.097752 1957985 client.go:171] duration metric: took 9.26738986s to LocalClient.Create
	I0327 23:56:51.097767 1957985 start.go:167] duration metric: took 9.267441634s to libmachine.API.Create "addons-482679"
	I0327 23:56:51.097778 1957985 start.go:293] postStartSetup for "addons-482679" (driver="docker")
	I0327 23:56:51.097788 1957985 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:56:51.097840 1957985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:56:51.097887 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:51.114033 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.203387 1957985 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:56:51.206671 1957985 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 23:56:51.206748 1957985 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 23:56:51.206767 1957985 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 23:56:51.206775 1957985 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0327 23:56:51.206785 1957985 filesync.go:126] Scanning /home/jenkins/minikube-integration/18158-1951721/.minikube/addons for local assets ...
	I0327 23:56:51.206851 1957985 filesync.go:126] Scanning /home/jenkins/minikube-integration/18158-1951721/.minikube/files for local assets ...
	I0327 23:56:51.206881 1957985 start.go:296] duration metric: took 109.097092ms for postStartSetup
	I0327 23:56:51.207185 1957985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-482679
	I0327 23:56:51.223144 1957985 profile.go:142] Saving config to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/config.json ...
	I0327 23:56:51.223465 1957985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 23:56:51.223520 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:51.238484 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.322517 1957985 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0327 23:56:51.326979 1957985 start.go:128] duration metric: took 9.498827247s to createHost
	I0327 23:56:51.327001 1957985 start.go:83] releasing machines lock for "addons-482679", held for 9.498995614s
	I0327 23:56:51.327076 1957985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-482679
	I0327 23:56:51.342497 1957985 ssh_runner.go:195] Run: cat /version.json
	I0327 23:56:51.342550 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:51.342552 1957985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:56:51.342613 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:51.363841 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.377847 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.453155 1957985 ssh_runner.go:195] Run: systemctl --version
	I0327 23:56:51.567261 1957985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 23:56:51.571697 1957985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0327 23:56:51.597057 1957985 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0327 23:56:51.597133 1957985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:56:51.625487 1957985 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
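For context: the two find/sed invocations above patch the image's preinstalled loopback CNI config in place (adding a "name" field and pinning cniVersion to 1.0.0) and park any bridge/podman configs out of the way so the CNI chosen later can own pod networking. A minimal sketch of the patched loopback file (the filename below is illustrative; minikube matches *loopback.conf*):

	$ cat /etc/cni/net.d/200-loopback.conf
	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}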
	I0327 23:56:51.625511 1957985 start.go:494] detecting cgroup driver to use...
	I0327 23:56:51.625546 1957985 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 23:56:51.625600 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 23:56:51.637764 1957985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 23:56:51.648648 1957985 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:56:51.648716 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:56:51.662554 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:56:51.677171 1957985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:56:51.764995 1957985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:56:51.859743 1957985 docker.go:233] disabling docker service ...
	I0327 23:56:51.859841 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:56:51.879357 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:56:51.892189 1957985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:56:51.982747 1957985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:56:52.079929 1957985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 23:56:52.091193 1957985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:56:52.107279 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0327 23:56:52.116873 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 23:56:52.126418 1957985 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 23:56:52.126487 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 23:56:52.135948 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 23:56:52.145560 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 23:56:52.155477 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 23:56:52.165979 1957985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:56:52.175366 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 23:56:52.186689 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 23:56:52.197012 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
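Taken together, the sed edits above leave /etc/containerd/config.toml configured for the cgroupfs driver. The touched keys can be checked in one pass (a sketch; output shown roughly, in file order):

	$ grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	    sandbox_image = "registry.k8s.io/pause:3.9"
	    restrict_oom_score_adj = false
	    enable_unprivileged_ports = true
	    conf_dir = "/etc/cni/net.d"
	    SystemdCgroup = false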
	I0327 23:56:52.206702 1957985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:56:52.215358 1957985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:56:52.223936 1957985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:52.312907 1957985 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 23:56:52.435767 1957985 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0327 23:56:52.435891 1957985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0327 23:56:52.439539 1957985 start.go:562] Will wait 60s for crictl version
	I0327 23:56:52.439628 1957985 ssh_runner.go:195] Run: which crictl
	I0327 23:56:52.442967 1957985 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:56:52.484243 1957985 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0327 23:56:52.484356 1957985 ssh_runner.go:195] Run: containerd --version
	I0327 23:56:52.506760 1957985 ssh_runner.go:195] Run: containerd --version
	I0327 23:56:52.529493 1957985 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0327 23:56:52.531358 1957985 cli_runner.go:164] Run: docker network inspect addons-482679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
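The Go template above collapses name, driver, subnet, gateway, MTU, and container IPs into one JSON blob for parsing. An equivalent-in-spirit but simpler query (illustrative) for just the subnet and gateway:

	$ docker network inspect addons-482679 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	192.168.49.0/24 192.168.49.1

(192.168.49.0/24 is the subnet this run ended up with, judging by the 192.168.49.x addresses throughout the log.)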
	I0327 23:56:52.544026 1957985 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0327 23:56:52.547499 1957985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:56:52.558043 1957985 kubeadm.go:877] updating cluster {Name:addons-482679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerI
Ps:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 23:56:52.558170 1957985 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 23:56:52.558243 1957985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:56:52.594366 1957985 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 23:56:52.594392 1957985 containerd.go:534] Images already preloaded, skipping extraction
	I0327 23:56:52.594454 1957985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:56:52.631946 1957985 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 23:56:52.631970 1957985 cache_images.go:84] Images are preloaded, skipping loading
	I0327 23:56:52.631978 1957985 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 containerd true true} ...
	I0327 23:56:52.632126 1957985 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-482679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 23:56:52.632206 1957985 ssh_runner.go:195] Run: sudo crictl info
	I0327 23:56:52.668764 1957985 cni.go:84] Creating CNI manager for ""
	I0327 23:56:52.668792 1957985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:56:52.668803 1957985 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 23:56:52.668854 1957985 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-482679 NodeName:addons-482679 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 23:56:52.669024 1957985 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-482679"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 23:56:52.669094 1957985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:56:52.678213 1957985 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 23:56:52.678293 1957985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 23:56:52.687206 1957985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0327 23:56:52.704717 1957985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:56:52.721627 1957985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
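Once the kubeadm config has been shipped to the node, it can be dry-checked for schema errors; on recent kubeadm releases something like the following should work (illustrative only; the test run instead relies on the long --ignore-preflight-errors list further below):

	$ sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new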
	I0327 23:56:52.739076 1957985 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0327 23:56:52.742260 1957985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:56:52.752455 1957985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:52.837717 1957985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:56:52.853056 1957985 certs.go:68] Setting up /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679 for IP: 192.168.49.2
	I0327 23:56:52.853082 1957985 certs.go:194] generating shared ca certs ...
	I0327 23:56:52.853100 1957985 certs.go:226] acquiring lock for ca certs: {Name:mka210db6b2adfd3b9800e3583e6835c01f5e440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:52.853233 1957985 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key
	I0327 23:56:53.716142 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt ...
	I0327 23:56:53.716202 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt: {Name:mkcc6ede1578f5a347de6ffe474ab99de5073d8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:53.716424 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key ...
	I0327 23:56:53.716458 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key: {Name:mk8f857f2dd2c1d455c5a90020547b011d8eec6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:53.717038 1957985 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key
	I0327 23:56:53.974152 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.crt ...
	I0327 23:56:53.974186 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.crt: {Name:mk3b771debd68a5cc735024221bb41091d98eece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:53.974387 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key ...
	I0327 23:56:53.974401 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key: {Name:mk3cccf9e1b55e3e06259f7782a53a8437a8471d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:53.974958 1957985 certs.go:256] generating profile certs ...
	I0327 23:56:53.975021 1957985 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.key
	I0327 23:56:53.975047 1957985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt with IP's: []
	I0327 23:56:54.650005 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt ...
	I0327 23:56:54.650036 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: {Name:mk72e30acf2bf9a7152dc3fbf5db1dd30cbec821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:54.650223 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.key ...
	I0327 23:56:54.650237 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.key: {Name:mk23530e1a3f8c4c266f8dbc3412ebc17297e558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:54.650857 1957985 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key.a47cdf3f
	I0327 23:56:54.650881 1957985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt.a47cdf3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0327 23:56:55.109785 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt.a47cdf3f ...
	I0327 23:56:55.109819 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt.a47cdf3f: {Name:mkdb5d7599b3b845f49ee9d8ca83968b8a936680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:55.110685 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key.a47cdf3f ...
	I0327 23:56:55.110709 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key.a47cdf3f: {Name:mkcf6dc97e40955d49dfca8aa8d37604bc67f21b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:55.111331 1957985 certs.go:381] copying /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt.a47cdf3f -> /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt
	I0327 23:56:55.111436 1957985 certs.go:385] copying /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key.a47cdf3f -> /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key
	I0327 23:56:55.111499 1957985 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.key
	I0327 23:56:55.111522 1957985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.crt with IP's: []
	I0327 23:56:55.681366 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.crt ...
	I0327 23:56:55.681398 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.crt: {Name:mkd455035484f5918f196672dd737af39c96f54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:55.681587 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.key ...
	I0327 23:56:55.681601 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.key: {Name:mkd1b6a6029d28676002d627bff2049924cd310c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:55.682465 1957985 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 23:56:55.682516 1957985 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:56:55.682541 1957985 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:56:55.682578 1957985 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem (1675 bytes)
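All of the freshly minted profile certificates should chain back to the minikubeCA generated earlier. An illustrative host-side check:

	$ openssl verify \
	    -CAfile /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt \
	    /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt
	/home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt: OK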
	I0327 23:56:55.683216 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:56:55.707109 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:56:55.730560 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:56:55.754363 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:56:55.777308 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0327 23:56:55.801373 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 23:56:55.824193 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:56:55.847847 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 23:56:55.871080 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:56:55.895885 1957985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 23:56:55.913679 1957985 ssh_runner.go:195] Run: openssl version
	I0327 23:56:55.919049 1957985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:56:55.928642 1957985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:55.932036 1957985 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:55.932126 1957985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:55.939751 1957985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
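The b5213941.0 symlink name is not arbitrary: OpenSSL resolves trust anchors in /etc/ssl/certs by subject hash, which is exactly what the openssl x509 -hash call above computed. The two can be cross-checked directly:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941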
	I0327 23:56:55.949991 1957985 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:56:55.954371 1957985 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:56:55.954459 1957985 kubeadm.go:391] StartCluster: {Name:addons-482679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:56:55.954558 1957985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0327 23:56:55.954659 1957985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 23:56:55.996574 1957985 cri.go:89] found id: ""
	I0327 23:56:55.996694 1957985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 23:56:56.010704 1957985 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 23:56:56.019982 1957985 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0327 23:56:56.020112 1957985 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 23:56:56.029323 1957985 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 23:56:56.029349 1957985 kubeadm.go:156] found existing configuration files:
	
	I0327 23:56:56.029413 1957985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 23:56:56.038585 1957985 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 23:56:56.038655 1957985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 23:56:56.046826 1957985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 23:56:56.055554 1957985 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 23:56:56.055670 1957985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 23:56:56.064647 1957985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 23:56:56.073852 1957985 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 23:56:56.073940 1957985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 23:56:56.082254 1957985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 23:56:56.091042 1957985 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 23:56:56.091112 1957985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 23:56:56.099561 1957985 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0327 23:56:56.148754 1957985 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 23:56:56.149071 1957985 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 23:56:56.191118 1957985 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0327 23:56:56.191190 1957985 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0327 23:56:56.191228 1957985 kubeadm.go:309] OS: Linux
	I0327 23:56:56.191278 1957985 kubeadm.go:309] CGROUPS_CPU: enabled
	I0327 23:56:56.191334 1957985 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0327 23:56:56.191385 1957985 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0327 23:56:56.191435 1957985 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0327 23:56:56.191487 1957985 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0327 23:56:56.191538 1957985 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0327 23:56:56.191585 1957985 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0327 23:56:56.191635 1957985 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0327 23:56:56.191682 1957985 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0327 23:56:56.258976 1957985 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 23:56:56.259112 1957985 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 23:56:56.259221 1957985 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 23:56:56.481722 1957985 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 23:56:56.487092 1957985 out.go:204]   - Generating certificates and keys ...
	I0327 23:56:56.487234 1957985 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 23:56:56.487328 1957985 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 23:56:57.025787 1957985 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 23:56:57.183213 1957985 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 23:56:57.535410 1957985 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 23:56:57.822464 1957985 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 23:56:58.916031 1957985 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 23:56:58.916353 1957985 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-482679 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0327 23:56:59.740311 1957985 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 23:56:59.740860 1957985 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-482679 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0327 23:56:59.924844 1957985 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 23:57:00.981679 1957985 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 23:57:01.969137 1957985 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 23:57:01.969689 1957985 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 23:57:02.337555 1957985 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 23:57:03.673618 1957985 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 23:57:03.991167 1957985 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 23:57:04.751867 1957985 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 23:57:05.465821 1957985 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 23:57:05.466459 1957985 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 23:57:05.470899 1957985 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 23:57:05.473351 1957985 out.go:204]   - Booting up control plane ...
	I0327 23:57:05.473448 1957985 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 23:57:05.473524 1957985 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 23:57:05.473972 1957985 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 23:57:05.484500 1957985 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 23:57:05.485374 1957985 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 23:57:05.485637 1957985 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 23:57:05.585703 1957985 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 23:57:12.593083 1957985 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.007479 seconds
	I0327 23:57:12.612691 1957985 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 23:57:12.627285 1957985 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 23:57:13.155418 1957985 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 23:57:13.155863 1957985 kubeadm.go:309] [mark-control-plane] Marking the node addons-482679 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 23:57:13.667678 1957985 kubeadm.go:309] [bootstrap-token] Using token: vie3po.0q6g65kpvcijyxvl
	I0327 23:57:13.669393 1957985 out.go:204]   - Configuring RBAC rules ...
	I0327 23:57:13.669509 1957985 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 23:57:13.674654 1957985 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 23:57:13.683741 1957985 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 23:57:13.687762 1957985 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 23:57:13.691573 1957985 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 23:57:13.695358 1957985 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 23:57:13.708874 1957985 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 23:57:13.915329 1957985 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 23:57:14.080675 1957985 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 23:57:14.082142 1957985 kubeadm.go:309] 
	I0327 23:57:14.082260 1957985 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 23:57:14.082278 1957985 kubeadm.go:309] 
	I0327 23:57:14.082353 1957985 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 23:57:14.082368 1957985 kubeadm.go:309] 
	I0327 23:57:14.082401 1957985 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 23:57:14.082568 1957985 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 23:57:14.082628 1957985 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 23:57:14.082638 1957985 kubeadm.go:309] 
	I0327 23:57:14.082691 1957985 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 23:57:14.082700 1957985 kubeadm.go:309] 
	I0327 23:57:14.082746 1957985 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 23:57:14.082753 1957985 kubeadm.go:309] 
	I0327 23:57:14.082804 1957985 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 23:57:14.082890 1957985 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 23:57:14.082961 1957985 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 23:57:14.082971 1957985 kubeadm.go:309] 
	I0327 23:57:14.083053 1957985 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 23:57:14.083131 1957985 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 23:57:14.083140 1957985 kubeadm.go:309] 
	I0327 23:57:14.083222 1957985 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vie3po.0q6g65kpvcijyxvl \
	I0327 23:57:14.083326 1957985 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3196aff351ef2c125525871fc87785f4c92e902d09ef7d39825c387aa49fa380 \
	I0327 23:57:14.083351 1957985 kubeadm.go:309] 	--control-plane 
	I0327 23:57:14.083368 1957985 kubeadm.go:309] 
	I0327 23:57:14.083564 1957985 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 23:57:14.083576 1957985 kubeadm.go:309] 
	I0327 23:57:14.083659 1957985 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vie3po.0q6g65kpvcijyxvl \
	I0327 23:57:14.083764 1957985 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3196aff351ef2c125525871fc87785f4c92e902d09ef7d39825c387aa49fa380 
	I0327 23:57:14.086985 1957985 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0327 23:57:14.087199 1957985 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
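The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's DER-encoded public key. It can be recomputed on the node with the standard openssl pipeline (using the certs path minikube configured earlier):

	$ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex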
	I0327 23:57:14.087225 1957985 cni.go:84] Creating CNI manager for ""
	I0327 23:57:14.087248 1957985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:57:14.090370 1957985 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0327 23:57:14.091804 1957985 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0327 23:57:14.098009 1957985 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0327 23:57:14.098034 1957985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0327 23:57:14.137062 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
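Once the kindnet manifest is applied, its DaemonSet pods should come up in kube-system. An illustrative check (the app=kindnet label is assumed from minikube's bundled manifest):

	$ kubectl -n kube-system get pods -l app=kindnet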
	I0327 23:57:14.464355 1957985 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 23:57:14.464488 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:14.464600 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-482679 minikube.k8s.io/updated_at=2024_03_27T23_57_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873 minikube.k8s.io/name=addons-482679 minikube.k8s.io/primary=true
	I0327 23:57:14.609612 1957985 ops.go:34] apiserver oom_adj: -16
	I0327 23:57:14.609746 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:15.110620 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:15.610381 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:16.110601 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:16.609933 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:17.110417 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:17.609896 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:18.110012 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:18.610643 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:19.109882 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:19.609848 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:20.110270 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:20.610704 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:21.110312 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:21.610685 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:22.110702 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:22.610070 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:23.110373 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:23.610499 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:24.110242 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:24.610184 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:25.110857 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:25.609827 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:26.110379 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:26.610209 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:27.110642 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:27.222458 1957985 kubeadm.go:1107] duration metric: took 12.758015578s to wait for elevateKubeSystemPrivileges
	W0327 23:57:27.222489 1957985 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 23:57:27.222496 1957985 kubeadm.go:393] duration metric: took 31.268068211s to StartCluster
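The burst of "get sa default" calls above is minikube polling until the controller-manager has created the default service account, which is what the elevateKubeSystemPrivileges duration metric measures. The shell equivalent of that wait loop is roughly:

	until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	done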
	I0327 23:57:27.222512 1957985 settings.go:142] acquiring lock: {Name:mk8bd0eb5f984b7df18eb5fe3af15aec887e343a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:57:27.222986 1957985 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0327 23:57:27.223372 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/kubeconfig: {Name:mk4e0e309c01b086d75fed1e6a33183905fae8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:57:27.223553 1957985 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 23:57:27.225347 1957985 out.go:177] * Verifying Kubernetes components...
	I0327 23:57:27.223633 1957985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 23:57:27.223794 1957985 config.go:182] Loaded profile config "addons-482679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 23:57:27.223802 1957985 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
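Every key in the toEnable map corresponds to a minikube addons toggle, so the same selection could be made per-profile from a shell, e.g.:

	$ out/minikube-linux-arm64 -p addons-482679 addons enable metrics-server
	$ out/minikube-linux-arm64 -p addons-482679 addons list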
	I0327 23:57:27.227241 1957985 addons.go:69] Setting yakd=true in profile "addons-482679"
	I0327 23:57:27.227271 1957985 addons.go:234] Setting addon yakd=true in "addons-482679"
	I0327 23:57:27.227301 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.227753 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.227911 1957985 addons.go:69] Setting ingress=true in profile "addons-482679"
	I0327 23:57:27.227931 1957985 addons.go:234] Setting addon ingress=true in "addons-482679"
	I0327 23:57:27.227958 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.228349 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.229460 1957985 addons.go:69] Setting ingress-dns=true in profile "addons-482679"
	I0327 23:57:27.229492 1957985 addons.go:234] Setting addon ingress-dns=true in "addons-482679"
	I0327 23:57:27.229522 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.229969 1957985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:57:27.230161 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.232584 1957985 addons.go:69] Setting cloud-spanner=true in profile "addons-482679"
	I0327 23:57:27.232758 1957985 addons.go:234] Setting addon cloud-spanner=true in "addons-482679"
	I0327 23:57:27.232889 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.233798 1957985 addons.go:69] Setting inspektor-gadget=true in profile "addons-482679"
	I0327 23:57:27.233822 1957985 addons.go:234] Setting addon inspektor-gadget=true in "addons-482679"
	I0327 23:57:27.233848 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.234398 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.234928 1957985 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-482679"
	I0327 23:57:27.234995 1957985 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-482679"
	I0327 23:57:27.235033 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.244245 1957985 addons.go:69] Setting metrics-server=true in profile "addons-482679"
	I0327 23:57:27.244376 1957985 addons.go:234] Setting addon metrics-server=true in "addons-482679"
	I0327 23:57:27.244451 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.245043 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.245317 1957985 addons.go:69] Setting default-storageclass=true in profile "addons-482679"
	I0327 23:57:27.245369 1957985 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-482679"
	I0327 23:57:27.245641 1957985 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-482679"
	I0327 23:57:27.245683 1957985 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-482679"
	I0327 23:57:27.245734 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.255223 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.255551 1957985 addons.go:69] Setting gcp-auth=true in profile "addons-482679"
	I0327 23:57:27.255640 1957985 mustload.go:65] Loading cluster: addons-482679
	I0327 23:57:27.262041 1957985 config.go:182] Loaded profile config "addons-482679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 23:57:27.262447 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.268619 1957985 addons.go:69] Setting registry=true in profile "addons-482679"
	I0327 23:57:27.268655 1957985 addons.go:234] Setting addon registry=true in "addons-482679"
	I0327 23:57:27.268696 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.269113 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.256300 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.285461 1957985 addons.go:69] Setting storage-provisioner=true in profile "addons-482679"
	I0327 23:57:27.285493 1957985 addons.go:234] Setting addon storage-provisioner=true in "addons-482679"
	I0327 23:57:27.285530 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.256826 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.310242 1957985 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-482679"
	I0327 23:57:27.310320 1957985 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-482679"
	I0327 23:57:27.284037 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.316817 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.336780 1957985 addons.go:69] Setting volumesnapshots=true in profile "addons-482679"
	I0327 23:57:27.336874 1957985 addons.go:234] Setting addon volumesnapshots=true in "addons-482679"
	I0327 23:57:27.336941 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.337453 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.344804 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.379758 1957985 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:57:27.382116 1957985 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0327 23:57:27.383778 1957985 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:57:27.388324 1957985 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 23:57:27.388346 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0327 23:57:27.388412 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
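
Note: this inspect template is how the test resolves the host port Docker mapped to the node container's sshd (container port 22); every sshutil line below reuses the result (Port:35039 in this run). Run by hand against the same profile it prints just that port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-482679
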
	I0327 23:57:27.395062 1957985 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0327 23:57:27.401554 1957985 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0327 23:57:27.431014 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0327 23:57:27.431038 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0327 23:57:27.431106 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.457358 1957985 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0327 23:57:27.472209 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0327 23:57:27.420936 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0327 23:57:27.420949 1957985 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0327 23:57:27.465953 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.467412 1957985 addons.go:234] Setting addon default-storageclass=true in "addons-482679"
	I0327 23:57:27.472184 1957985 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0327 23:57:27.478334 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0327 23:57:27.481667 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.481680 1957985 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0327 23:57:27.481690 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0327 23:57:27.486141 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0327 23:57:27.486216 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.488415 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.488511 1957985 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 23:57:27.488517 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0327 23:57:27.488546 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.490008 1957985 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0327 23:57:27.490014 1957985 out.go:177]   - Using image docker.io/registry:2.8.3
	I0327 23:57:27.511080 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0327 23:57:27.514086 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0327 23:57:27.511303 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0327 23:57:27.497431 1957985 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 23:57:27.517546 1957985 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-482679"
	I0327 23:57:27.518156 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.520419 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 23:57:27.520432 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 23:57:27.523347 1957985 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 23:57:27.523447 1957985 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 23:57:27.523503 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.526703 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0327 23:57:27.528034 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0327 23:57:27.528619 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.531201 1957985 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0327 23:57:27.535322 1957985 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0327 23:57:27.535341 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0327 23:57:27.535405 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.533628 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.542156 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.550776 1957985 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:57:27.550797 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 23:57:27.550860 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.528246 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0327 23:57:27.558100 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.572874 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0327 23:57:27.583602 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0327 23:57:27.585574 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0327 23:57:27.528226 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.590264 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 23:57:27.590298 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0327 23:57:27.590387 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.611068 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.612223 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.659870 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.699710 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.710290 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.741307 1957985 out.go:177]   - Using image docker.io/busybox:stable
	I0327 23:57:27.739006 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.745757 1957985 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0327 23:57:27.744789 1957985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 23:57:27.744840 1957985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:57:27.746467 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.748329 1957985 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 23:57:27.748564 1957985 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 23:57:27.748578 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 23:57:27.748682 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.749971 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0327 23:57:27.750062 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.754927 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.757622 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.772715 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.805655 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.807219 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.934291 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0327 23:57:27.934315 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0327 23:57:27.978178 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 23:57:28.001946 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 23:57:28.001973 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0327 23:57:28.100684 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 23:57:28.144313 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0327 23:57:28.144341 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0327 23:57:28.171939 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0327 23:57:28.180074 1957985 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 23:57:28.180153 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0327 23:57:28.182570 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 23:57:28.191360 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0327 23:57:28.191422 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0327 23:57:28.217475 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 23:57:28.221210 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:57:28.257804 1957985 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0327 23:57:28.257885 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0327 23:57:28.296301 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 23:57:28.296382 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0327 23:57:28.325749 1957985 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 23:57:28.325820 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0327 23:57:28.395506 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0327 23:57:28.395585 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0327 23:57:28.416043 1957985 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 23:57:28.416141 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0327 23:57:28.453510 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 23:57:28.463451 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 23:57:28.463521 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0327 23:57:28.625661 1957985 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0327 23:57:28.625723 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0327 23:57:28.644932 1957985 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 23:57:28.644955 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 23:57:28.710615 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0327 23:57:28.715727 1957985 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 23:57:28.715795 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0327 23:57:28.739619 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 23:57:28.739685 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0327 23:57:28.774006 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0327 23:57:28.774075 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0327 23:57:28.815095 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 23:57:28.815171 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0327 23:57:28.864647 1957985 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 23:57:28.864719 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 23:57:28.884574 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 23:57:28.884636 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0327 23:57:29.088013 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 23:57:29.088095 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0327 23:57:29.115008 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 23:57:29.115081 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0327 23:57:29.132340 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0327 23:57:29.354268 1957985 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 23:57:29.354342 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0327 23:57:29.356980 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 23:57:29.359220 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 23:57:29.359290 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0327 23:57:29.513392 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0327 23:57:29.513465 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0327 23:57:29.532342 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 23:57:29.624190 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 23:57:29.624265 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0327 23:57:29.776965 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 23:57:29.777040 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0327 23:57:29.848984 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 23:57:29.849056 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0327 23:57:30.015005 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 23:57:30.015086 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0327 23:57:30.043765 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 23:57:30.145147 1957985 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.396759052s)
	I0327 23:57:30.145442 1957985 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.397402715s)
	I0327 23:57:30.145600 1957985 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
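
Note: the 2.4s ConfigMap rewrite completed above is a sed pipeline over the stock CoreDNS Corefile: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors. Reconstructed from those sed expressions, the patched fragment looks roughly like this (unchanged stock stanzas such as health, ready, and cache are elided):

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
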
	I0327 23:57:30.147537 1957985 node_ready.go:35] waiting up to 6m0s for node "addons-482679" to be "Ready" ...
	I0327 23:57:30.155281 1957985 node_ready.go:49] node "addons-482679" has status "Ready":"True"
	I0327 23:57:30.155314 1957985 node_ready.go:38] duration metric: took 7.683329ms for node "addons-482679" to be "Ready" ...
	I0327 23:57:30.155327 1957985 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:57:30.174255 1957985 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-g8vgw" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:30.275326 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 23:57:30.275389 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0327 23:57:30.505856 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 23:57:30.505939 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0327 23:57:30.671787 1957985 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-482679" context rescaled to 1 replicas
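
Note: scaling CoreDNS down to one replica is routine for a single-node minikube cluster; the kapi helper does it through the API, but assuming kubectl access the equivalent by hand would be:

	kubectl --context addons-482679 -n kube-system scale deployment coredns --replicas=1

The second replica's pod is what disappears shortly afterwards (coredns-76f75df574-vjc9g, reported "not found" further down).
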
	I0327 23:57:30.916049 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 23:57:30.916128 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0327 23:57:31.300438 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 23:57:32.205497 1957985 pod_ready.go:102] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"False"
	I0327 23:57:34.495559 1957985 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0327 23:57:34.495723 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:34.518301 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:34.708805 1957985 pod_ready.go:102] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"False"
	I0327 23:57:34.841969 1957985 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0327 23:57:34.929385 1957985 addons.go:234] Setting addon gcp-auth=true in "addons-482679"
	I0327 23:57:34.929438 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:34.929869 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:34.952436 1957985 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0327 23:57:34.952492 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:34.974226 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:35.148895 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.170677415s)
	I0327 23:57:35.148943 1957985 addons.go:470] Verifying addon ingress=true in "addons-482679"
	I0327 23:57:35.161582 1957985 out.go:177] * Verifying ingress addon...
	I0327 23:57:35.149117 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.048404811s)
	I0327 23:57:35.149210 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.97714425s)
	I0327 23:57:35.149273 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.966646343s)
	I0327 23:57:35.149294 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.93174845s)
	I0327 23:57:35.149340 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.928060704s)
	I0327 23:57:35.149393 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.695801618s)
	I0327 23:57:35.149424 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.438747064s)
	I0327 23:57:35.149495 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.017079691s)
	I0327 23:57:35.149611 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.792556906s)
	I0327 23:57:35.149730 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.61731294s)
	I0327 23:57:35.149797 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.105961734s)
	I0327 23:57:35.178948 1957985 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0327 23:57:35.179338 1957985 addons.go:470] Verifying addon registry=true in "addons-482679"
	I0327 23:57:35.186783 1957985 out.go:177] * Verifying registry addon...
	I0327 23:57:35.179509 1957985 addons.go:470] Verifying addon metrics-server=true in "addons-482679"
	W0327 23:57:35.179558 1957985 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 23:57:35.198136 1957985 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0327 23:57:35.199666 1957985 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-482679 service yakd-dashboard -n yakd-dashboard
	
	I0327 23:57:35.199832 1957985 retry.go:31] will retry after 186.444737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 23:57:35.230717 1957985 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 23:57:35.231674 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:35.231640 1957985 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0327 23:57:35.231761 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0327 23:57:35.243224 1957985 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
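
Note: that warning is an optimistic-concurrency conflict, not a missing object: while minikube was marking the local-path StorageClass (created by storage-provisioner-rancher above) as non-default, another writer updated it first, so the write was rejected against a stale resourceVersion. Re-reading and re-applying the annotation normally resolves it; a minimal manual retry, assuming kubectl access, would be:

	kubectl --context addons-482679 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
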
	I0327 23:57:35.388741 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
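
Note: the apply failure being retried here is a CRD ordering race: the first apply submitted the VolumeSnapshotClass CRD and a VolumeSnapshotClass object in the same batch, and the custom resource was rejected because the freshly created CRD was not yet established ("ensure CRDs are installed first"). The --force retry above succeeds about 1.8s later once discovery catches up. A sketch that avoids the race entirely, assuming the same manifest layout on the node, is to apply the CRD first and wait for it:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
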
	I0327 23:57:35.688376 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:35.705521 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:36.200757 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:36.210021 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:36.578515 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.27797928s)
	I0327 23:57:36.578550 1957985 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-482679"
	I0327 23:57:36.580497 1957985 out.go:177] * Verifying csi-hostpath-driver addon...
	I0327 23:57:36.578750 1957985 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.626287685s)
	I0327 23:57:36.582879 1957985 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0327 23:57:36.584923 1957985 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:57:36.587098 1957985 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0327 23:57:36.589588 1957985 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 23:57:36.589610 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0327 23:57:36.595735 1957985 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 23:57:36.595764 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:36.623828 1957985 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 23:57:36.623853 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0327 23:57:36.662811 1957985 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 23:57:36.662836 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0327 23:57:36.683540 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:36.704103 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:36.711522 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 23:57:37.088490 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:37.180690 1957985 pod_ready.go:102] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"False"
	I0327 23:57:37.184333 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:37.204926 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:37.219216 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.830422265s)
	I0327 23:57:37.618255 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:37.621379 1957985 addons.go:470] Verifying addon gcp-auth=true in "addons-482679"
	I0327 23:57:37.623385 1957985 out.go:177] * Verifying gcp-auth addon...
	I0327 23:57:37.627056 1957985 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0327 23:57:37.668288 1957985 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0327 23:57:37.668314 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:37.690236 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:37.706755 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:38.089726 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:38.133654 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:38.185878 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:38.204702 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:38.588421 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:38.630966 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:38.684055 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:38.704472 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:39.088427 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:39.131502 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:39.186694 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:39.187060 1957985 pod_ready.go:102] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"False"
	I0327 23:57:39.206210 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:39.588884 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:39.631666 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:39.681916 1957985 pod_ready.go:92] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.681942 1957985 pod_ready.go:81] duration metric: took 9.507605067s for pod "coredns-76f75df574-g8vgw" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.681955 1957985 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vjc9g" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.685961 1957985 pod_ready.go:97] error getting pod "coredns-76f75df574-vjc9g" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-vjc9g" not found
	I0327 23:57:39.685987 1957985 pod_ready.go:81] duration metric: took 4.025155ms for pod "coredns-76f75df574-vjc9g" in "kube-system" namespace to be "Ready" ...
	E0327 23:57:39.685997 1957985 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-vjc9g" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-vjc9g" not found
	I0327 23:57:39.686005 1957985 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.687655 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:39.692381 1957985 pod_ready.go:92] pod "etcd-addons-482679" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.692409 1957985 pod_ready.go:81] duration metric: took 6.396847ms for pod "etcd-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.692423 1957985 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.698268 1957985 pod_ready.go:92] pod "kube-apiserver-addons-482679" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.698294 1957985 pod_ready.go:81] duration metric: took 5.863165ms for pod "kube-apiserver-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.698304 1957985 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.707313 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:39.707824 1957985 pod_ready.go:92] pod "kube-controller-manager-addons-482679" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.707849 1957985 pod_ready.go:81] duration metric: took 9.537462ms for pod "kube-controller-manager-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.707860 1957985 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-27xjv" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.879488 1957985 pod_ready.go:92] pod "kube-proxy-27xjv" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.879514 1957985 pod_ready.go:81] duration metric: took 171.645903ms for pod "kube-proxy-27xjv" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.879527 1957985 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:40.089793 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:40.130935 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:40.184716 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:40.204805 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:40.278675 1957985 pod_ready.go:92] pod "kube-scheduler-addons-482679" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:40.278750 1957985 pod_ready.go:81] duration metric: took 399.213739ms for pod "kube-scheduler-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:40.278777 1957985 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mrhg6" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:40.589640 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:40.631421 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:40.678303 1957985 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-mrhg6" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:40.678330 1957985 pod_ready.go:81] duration metric: took 399.531497ms for pod "nvidia-device-plugin-daemonset-mrhg6" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:40.678340 1957985 pod_ready.go:38] duration metric: took 10.522998679s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:57:40.678357 1957985 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:57:40.678469 1957985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:57:40.683369 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:40.695240 1957985 api_server.go:72] duration metric: took 13.47165881s to wait for apiserver process to appear ...
	I0327 23:57:40.695266 1957985 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:57:40.695286 1957985 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 23:57:40.702817 1957985 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
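
Note: the healthz probe is a plain GET against the apiserver's HTTPS endpoint; an HTTP 200 with body "ok" is what the test treats as healthy. The same check by hand (-k skips verification of minikube's self-signed certificate):

	curl -k https://192.168.49.2:8443/healthz
	ok
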
	I0327 23:57:40.703949 1957985 api_server.go:141] control plane version: v1.29.3
	I0327 23:57:40.703977 1957985 api_server.go:131] duration metric: took 8.703893ms to wait for apiserver health ...
	I0327 23:57:40.703987 1957985 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:57:40.707066 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:40.885191 1957985 system_pods.go:59] 18 kube-system pods found
	I0327 23:57:40.885229 1957985 system_pods.go:61] "coredns-76f75df574-g8vgw" [867c0aa5-1d07-4ff2-be4b-9ed7ce403871] Running
	I0327 23:57:40.885238 1957985 system_pods.go:61] "csi-hostpath-attacher-0" [daf424b4-f84b-4a6a-a011-4bfa22212b97] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 23:57:40.885247 1957985 system_pods.go:61] "csi-hostpath-resizer-0" [7f638b92-85f4-415f-a924-86650cfe8dfb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 23:57:40.885256 1957985 system_pods.go:61] "csi-hostpathplugin-6lzmh" [df3aa80c-fed2-47a2-b856-111e7de3128b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 23:57:40.885262 1957985 system_pods.go:61] "etcd-addons-482679" [f70527f9-6a15-4ab5-8778-24cee184984b] Running
	I0327 23:57:40.885267 1957985 system_pods.go:61] "kindnet-425ft" [54e31491-ad3f-4dab-8464-853f25f30101] Running
	I0327 23:57:40.885271 1957985 system_pods.go:61] "kube-apiserver-addons-482679" [1502c7af-0780-41fa-a22e-6c294015cca2] Running
	I0327 23:57:40.885283 1957985 system_pods.go:61] "kube-controller-manager-addons-482679" [7de1d5fa-2fc1-4214-8882-e6338a7d0b2c] Running
	I0327 23:57:40.885292 1957985 system_pods.go:61] "kube-ingress-dns-minikube" [ba1e77e0-74b3-4260-8083-f6d10de6cff7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0327 23:57:40.885303 1957985 system_pods.go:61] "kube-proxy-27xjv" [2251c29f-a21a-47c4-bbde-6205c6556081] Running
	I0327 23:57:40.885307 1957985 system_pods.go:61] "kube-scheduler-addons-482679" [6da7eaa9-257a-4896-afb6-860a4d96f8fe] Running
	I0327 23:57:40.885313 1957985 system_pods.go:61] "metrics-server-69cf46c98-txgn5" [e0e41c2b-b28d-474c-81f6-204fae8b58f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0327 23:57:40.885318 1957985 system_pods.go:61] "nvidia-device-plugin-daemonset-mrhg6" [aa83998b-3a9a-4746-abdb-f97f818000d6] Running
	I0327 23:57:40.885324 1957985 system_pods.go:61] "registry-c6js5" [f3baf2dc-389c-478c-8148-510e917e380b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0327 23:57:40.885330 1957985 system_pods.go:61] "registry-proxy-mgwql" [5d95c02e-cd9e-4a2f-8b0b-0e6e7f131536] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 23:57:40.885339 1957985 system_pods.go:61] "snapshot-controller-58dbcc7b99-j6jxb" [f5f560d1-85b1-4430-9933-76522bc5156a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:57:40.885347 1957985 system_pods.go:61] "snapshot-controller-58dbcc7b99-xf7gf" [90d05ce1-64eb-425c-a5c3-9cc16d6d5458] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:57:40.885356 1957985 system_pods.go:61] "storage-provisioner" [ceceb59c-5b5a-4651-83fe-54d165f371b0] Running
	I0327 23:57:40.885363 1957985 system_pods.go:74] duration metric: took 181.370433ms to wait for pod list to return data ...
	I0327 23:57:40.885379 1957985 default_sa.go:34] waiting for default service account to be created ...
	I0327 23:57:41.078236 1957985 default_sa.go:45] found service account: "default"
	I0327 23:57:41.078266 1957985 default_sa.go:55] duration metric: took 192.878418ms for default service account to be created ...
	I0327 23:57:41.078277 1957985 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 23:57:41.089828 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:41.131353 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:41.184086 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:41.205530 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:41.284836 1957985 system_pods.go:86] 18 kube-system pods found
	I0327 23:57:41.284869 1957985 system_pods.go:89] "coredns-76f75df574-g8vgw" [867c0aa5-1d07-4ff2-be4b-9ed7ce403871] Running
	I0327 23:57:41.284880 1957985 system_pods.go:89] "csi-hostpath-attacher-0" [daf424b4-f84b-4a6a-a011-4bfa22212b97] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 23:57:41.284887 1957985 system_pods.go:89] "csi-hostpath-resizer-0" [7f638b92-85f4-415f-a924-86650cfe8dfb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 23:57:41.284942 1957985 system_pods.go:89] "csi-hostpathplugin-6lzmh" [df3aa80c-fed2-47a2-b856-111e7de3128b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 23:57:41.284949 1957985 system_pods.go:89] "etcd-addons-482679" [f70527f9-6a15-4ab5-8778-24cee184984b] Running
	I0327 23:57:41.284959 1957985 system_pods.go:89] "kindnet-425ft" [54e31491-ad3f-4dab-8464-853f25f30101] Running
	I0327 23:57:41.284964 1957985 system_pods.go:89] "kube-apiserver-addons-482679" [1502c7af-0780-41fa-a22e-6c294015cca2] Running
	I0327 23:57:41.284968 1957985 system_pods.go:89] "kube-controller-manager-addons-482679" [7de1d5fa-2fc1-4214-8882-e6338a7d0b2c] Running
	I0327 23:57:41.284990 1957985 system_pods.go:89] "kube-ingress-dns-minikube" [ba1e77e0-74b3-4260-8083-f6d10de6cff7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0327 23:57:41.285003 1957985 system_pods.go:89] "kube-proxy-27xjv" [2251c29f-a21a-47c4-bbde-6205c6556081] Running
	I0327 23:57:41.285009 1957985 system_pods.go:89] "kube-scheduler-addons-482679" [6da7eaa9-257a-4896-afb6-860a4d96f8fe] Running
	I0327 23:57:41.285018 1957985 system_pods.go:89] "metrics-server-69cf46c98-txgn5" [e0e41c2b-b28d-474c-81f6-204fae8b58f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0327 23:57:41.285023 1957985 system_pods.go:89] "nvidia-device-plugin-daemonset-mrhg6" [aa83998b-3a9a-4746-abdb-f97f818000d6] Running
	I0327 23:57:41.285038 1957985 system_pods.go:89] "registry-c6js5" [f3baf2dc-389c-478c-8148-510e917e380b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0327 23:57:41.285045 1957985 system_pods.go:89] "registry-proxy-mgwql" [5d95c02e-cd9e-4a2f-8b0b-0e6e7f131536] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 23:57:41.285070 1957985 system_pods.go:89] "snapshot-controller-58dbcc7b99-j6jxb" [f5f560d1-85b1-4430-9933-76522bc5156a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:57:41.285091 1957985 system_pods.go:89] "snapshot-controller-58dbcc7b99-xf7gf" [90d05ce1-64eb-425c-a5c3-9cc16d6d5458] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:57:41.285108 1957985 system_pods.go:89] "storage-provisioner" [ceceb59c-5b5a-4651-83fe-54d165f371b0] Running
	I0327 23:57:41.285124 1957985 system_pods.go:126] duration metric: took 206.839589ms to wait for k8s-apps to be running ...
	I0327 23:57:41.285132 1957985 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 23:57:41.285208 1957985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:57:41.297420 1957985 system_svc.go:56] duration metric: took 12.278712ms WaitForService to wait for kubelet
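The kubelet check above is nothing more than an exit-code probe: `systemctl is-active --quiet` exits 0 iff the unit is active. A minimal local sketch of the same probe (minikube actually executes the command over SSH inside the node container via its ssh_runner; running it directly here is an illustrative simplification):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as the Run: line above; --quiet suppresses output,
	// so the exit code alone answers "is kubelet running?".
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet running:", err == nil)
}
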
	I0327 23:57:41.297450 1957985 kubeadm.go:576] duration metric: took 14.073873624s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:57:41.297472 1957985 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:57:41.478862 1957985 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 23:57:41.478898 1957985 node_conditions.go:123] node cpu capacity is 2
	I0327 23:57:41.478913 1957985 node_conditions.go:105] duration metric: took 181.435237ms to run NodePressure ...
	I0327 23:57:41.478947 1957985 start.go:240] waiting for startup goroutines ...
	I0327 23:57:41.589817 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:41.631066 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:41.684561 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:41.704678 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:42.093736 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:42.137208 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:42.189655 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:42.216503 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:42.589767 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:42.631531 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:42.684317 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:42.705154 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:43.089685 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:43.131794 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:43.184234 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:43.205398 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:43.590150 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:43.631937 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:43.684659 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:43.704123 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:44.091764 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:44.132215 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:44.183789 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:44.205389 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:44.591428 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:44.631103 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:44.683582 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:44.705498 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:45.099434 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:45.150086 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:45.187584 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:45.219200 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:45.588931 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:45.630540 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:45.684251 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:45.705509 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:46.089575 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:46.131460 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:46.184585 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:46.205838 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:46.590089 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:46.631326 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:46.684522 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:46.704275 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:47.089813 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:47.131566 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:47.184516 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:47.204961 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:47.589391 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:47.630805 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:47.683867 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:47.704315 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:48.088598 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:48.132660 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:48.184264 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:48.205122 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:48.591898 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:48.631327 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:48.684612 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:48.707063 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:49.089605 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:49.131522 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:49.184440 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:49.205407 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:49.589809 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:49.631738 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:49.684351 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:49.705455 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:50.090143 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:50.134163 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:50.191355 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:50.205859 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:50.589042 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:50.631373 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:50.684909 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:50.704229 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:51.088935 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:51.131068 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:51.184013 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:51.204887 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:51.588895 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:51.631209 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:51.684073 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:51.704810 1957985 kapi.go:107] duration metric: took 16.506672994s to wait for kubernetes.io/minikube-addons=registry ...
	I0327 23:57:52.089435 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:52.131101 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:52.184899 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:52.589103 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:52.631488 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:52.684695 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:53.089508 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:53.130810 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:53.186065 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:53.589432 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:53.631458 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:53.688210 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:54.089435 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:54.131407 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:54.183982 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:54.595815 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:54.641096 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:54.693899 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:55.090267 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:55.131826 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:55.185136 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:55.588826 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:55.631280 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:55.684519 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:56.089118 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:56.131615 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:56.184375 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:56.588991 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:56.630780 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:56.684805 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:57.089267 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:57.131330 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:57.184385 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:57.588407 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:57.631070 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:57.685063 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:58.089831 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:58.131442 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:58.184346 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:58.590038 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:58.631213 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:58.684086 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:59.089169 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:59.131636 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:59.184236 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:59.589365 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:59.631640 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:59.685328 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:00.093621 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:00.162082 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:00.246896 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:00.623347 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:00.630914 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:00.684121 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:01.089353 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:01.134350 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:01.190378 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:01.592549 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:01.630705 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:01.684036 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:02.090532 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:02.135285 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:02.184980 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:02.592689 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:02.633800 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:02.698700 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:03.088838 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:03.131293 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:03.189202 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:03.589060 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:03.631023 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:03.683818 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:04.089387 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:04.131446 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:04.183752 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:04.590420 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:04.631652 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:04.683767 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:05.089173 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:05.131714 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:05.184442 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:05.589201 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:05.641646 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:05.684671 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:06.089560 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:06.131645 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:06.184219 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:06.588844 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:06.638478 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:06.685385 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:07.089074 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:07.133530 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:07.184801 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:07.588207 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:07.634182 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:07.684232 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:08.088933 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:08.131609 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:08.184207 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:08.589372 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:08.631662 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:08.694051 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:09.094524 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:09.131042 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:09.185379 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:09.590870 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:09.632311 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:09.684635 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:10.094764 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:10.131390 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:10.184973 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:10.588708 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:10.631357 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:10.684939 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:11.093251 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:11.131120 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:11.183802 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:11.588432 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:11.631218 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:11.683707 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:12.089534 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:12.131603 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:12.184358 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:12.589220 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:12.630791 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:12.684409 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:13.089870 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:13.133395 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:13.184044 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:13.588812 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:13.632242 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:13.684187 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:14.090157 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:14.131480 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:14.184594 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:14.589039 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:14.630924 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:14.683607 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:15.089573 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:15.132383 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:15.184917 1957985 kapi.go:107] duration metric: took 40.005966245s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0327 23:58:15.589892 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:15.633868 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:16.089789 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:16.131634 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:16.588703 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:16.631328 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:17.090082 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:17.131811 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:17.589086 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:17.630966 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:18.088684 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:18.135691 1957985 kapi.go:107] duration metric: took 40.508633342s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0327 23:58:18.138955 1957985 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-482679 cluster.
	I0327 23:58:18.141002 1957985 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0327 23:58:18.143346 1957985 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
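The three gcp-auth messages above describe the opt-out mechanism: the admission webhook skips any pod carrying the `gcp-auth-skip-secret` label. A minimal sketch of such a pod object using client-go types; the pod name, image, and the label value "true" are illustrative assumptions (the message only requires the key to be present):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod that opts out of GCP credential injection: the
	// gcp-auth-skip-secret label key comes from the log message above;
	// the "true" value is an assumption (the key alone should suffice).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	fmt.Println(pod.Labels)
}
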
	I0327 23:58:18.589454 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:19.087859 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:19.588349 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:20.089519 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:20.588850 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:21.089115 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:21.588657 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:22.088340 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:22.589325 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:23.089059 1957985 kapi.go:107] duration metric: took 46.506181182s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0327 23:58:23.091027 1957985 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0327 23:58:23.092956 1957985 addons.go:505] duration metric: took 55.86914339s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0327 23:58:23.093016 1957985 start.go:245] waiting for cluster config update ...
	I0327 23:58:23.093044 1957985 start.go:254] writing updated cluster config ...
	I0327 23:58:23.093330 1957985 ssh_runner.go:195] Run: rm -f paused
	I0327 23:58:23.421459 1957985 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 23:58:23.423220 1957985 out.go:177] * Done! kubectl is now configured to use "addons-482679" cluster and "default" namespace by default
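For reference, the repeated kapi.go:96 lines above are produced by a simple poll loop: list the pods matching a label selector, report the state while any is still Pending, and stop once all are Running. A minimal sketch of that pattern using client-go — not minikube's actual kapi.go implementation; the kube-system namespace and the ~500ms sleep are read off the log selectors and timestamps:

package kapi

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsRunning polls until every pod matching selector (e.g.
// "kubernetes.io/minikube-addons=registry") reports phase Running,
// or the timeout expires.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}
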
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	f92a28d2b0d0f       dd1b12fcb6097       6 seconds ago        Exited              hello-world-app            2                   7a711e53fd8c0       hello-world-app-5d77478584-zdvkh
	96537110c042f       b8c82647e8a25       34 seconds ago       Running             nginx                      0                   d0fda75a2f88e       nginx
	83571fa875da7       6ef582f3ec844       About a minute ago   Running             gcp-auth                   0                   f178f1acc2749       gcp-auth-7d69788767-52t8m
	87b8f92a82a79       1a024e390dd05       About a minute ago   Exited              patch                      0                   9e5d21e49744a       ingress-nginx-admission-patch-5xfh6
	a81cce371563d       1a024e390dd05       About a minute ago   Exited              create                     0                   0711454d2c9b9       ingress-nginx-admission-create-f7l2t
	8c99736a4ba09       20e3f2db01e81       About a minute ago   Running             yakd                       0                   29d6ac951130a       yakd-dashboard-9947fc6bf-8j2mj
	db3736f924eaf       7ce2150c8929b       About a minute ago   Running             local-path-provisioner     0                   96b7a84a147e4       local-path-provisioner-78b46b4d5c-ddrpj
	e53c41995565f       6727f8bc3105d       About a minute ago   Running             cloud-spanner-emulator     0                   99dcb83b45862       cloud-spanner-emulator-5446596998-qpg92
	2a35ab8a61e7c       2437cf7621777       About a minute ago   Running             coredns                    0                   c962d24dc543d       coredns-76f75df574-g8vgw
	9bcf8fe260ca0       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr   0                   9befe46c90a37       nvidia-device-plugin-daemonset-mrhg6
	63306e1955da8       ba04bb24b9575       2 minutes ago        Running             storage-provisioner        0                   9eb531817361c       storage-provisioner
	3366eca273722       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                0                   c86668c247585       kindnet-425ft
	48bf7168aaa65       0e9b4a0d1e86d       2 minutes ago        Running             kube-proxy                 0                   e8ac5e3fe9404       kube-proxy-27xjv
	3bbf36bc6c552       4b51f9f6bc9b9       2 minutes ago        Running             kube-scheduler             0                   4fbec80ba189e       kube-scheduler-addons-482679
	0e9919976bff8       121d70d9a3805       2 minutes ago        Running             kube-controller-manager    0                   8fee3ecb1d006       kube-controller-manager-addons-482679
	b3e82d379c9aa       014faa467e297       2 minutes ago        Running             etcd                       0                   5509c048432a7       etcd-addons-482679
	def2c471b2a7f       2581114f5709d       2 minutes ago        Running             kube-apiserver             0                   1d10a952c6d6d       kube-apiserver-addons-482679
	
	
	==> containerd <==
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.458397248Z" level=info msg="StopContainer for \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\" returns successfully"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.459006754Z" level=info msg="StopPodSandbox for \"a73c8f04216c728b36f45b1875d661d012d0765e00579e0ef02be1b74c57a7ad\""
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.459082675Z" level=info msg="Container to stop \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.470367999Z" level=warning msg="cleanup warnings time=\"2024-03-27T23:59:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9022 runtime=io.containerd.runc.v2\n"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.473710771Z" level=info msg="StopContainer for \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\" returns successfully"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.475875910Z" level=info msg="StopPodSandbox for \"6bed8194beed91b76b492e7de1011ecc8400fe49cae9a67f42d90a93dc2cbc65\""
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.476049972Z" level=info msg="Container to stop \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.520371384Z" level=info msg="shim disconnected" id=a73c8f04216c728b36f45b1875d661d012d0765e00579e0ef02be1b74c57a7ad
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.521284002Z" level=warning msg="cleaning up after shim disconnected" id=a73c8f04216c728b36f45b1875d661d012d0765e00579e0ef02be1b74c57a7ad namespace=k8s.io
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.521405511Z" level=info msg="cleaning up dead shim"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.529815137Z" level=info msg="shim disconnected" id=6bed8194beed91b76b492e7de1011ecc8400fe49cae9a67f42d90a93dc2cbc65
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.530181354Z" level=warning msg="cleaning up after shim disconnected" id=6bed8194beed91b76b492e7de1011ecc8400fe49cae9a67f42d90a93dc2cbc65 namespace=k8s.io
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.530307048Z" level=info msg="cleaning up dead shim"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.532528415Z" level=warning msg="cleanup warnings time=\"2024-03-27T23:59:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9076 runtime=io.containerd.runc.v2\n"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.544009389Z" level=warning msg="cleanup warnings time=\"2024-03-27T23:59:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9089 runtime=io.containerd.runc.v2\n"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.578895353Z" level=info msg="TearDown network for sandbox \"a73c8f04216c728b36f45b1875d661d012d0765e00579e0ef02be1b74c57a7ad\" successfully"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.578955135Z" level=info msg="StopPodSandbox for \"a73c8f04216c728b36f45b1875d661d012d0765e00579e0ef02be1b74c57a7ad\" returns successfully"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.594226672Z" level=info msg="TearDown network for sandbox \"6bed8194beed91b76b492e7de1011ecc8400fe49cae9a67f42d90a93dc2cbc65\" successfully"
	Mar 27 23:59:31 addons-482679 containerd[761]: time="2024-03-27T23:59:31.594489071Z" level=info msg="StopPodSandbox for \"6bed8194beed91b76b492e7de1011ecc8400fe49cae9a67f42d90a93dc2cbc65\" returns successfully"
	Mar 27 23:59:32 addons-482679 containerd[761]: time="2024-03-27T23:59:32.408001035Z" level=info msg="RemoveContainer for \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\""
	Mar 27 23:59:32 addons-482679 containerd[761]: time="2024-03-27T23:59:32.426520797Z" level=info msg="RemoveContainer for \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\" returns successfully"
	Mar 27 23:59:32 addons-482679 containerd[761]: time="2024-03-27T23:59:32.451920637Z" level=error msg="ContainerStatus for \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\": not found"
	Mar 27 23:59:32 addons-482679 containerd[761]: time="2024-03-27T23:59:32.486684738Z" level=info msg="RemoveContainer for \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\""
	Mar 27 23:59:32 addons-482679 containerd[761]: time="2024-03-27T23:59:32.499060336Z" level=info msg="RemoveContainer for \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\" returns successfully"
	Mar 27 23:59:32 addons-482679 containerd[761]: time="2024-03-27T23:59:32.499651044Z" level=error msg="ContainerStatus for \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\": not found"
	
	
	==> coredns [2a35ab8a61e7c79c31adadc4b51316c6cdb4b0452dbe11c81fedb4b18115ca0b] <==
	[INFO] 10.244.0.19:42842 - 59084 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062802s
	[INFO] 10.244.0.19:42842 - 9694 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070317s
	[INFO] 10.244.0.19:56186 - 11385 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00251108s
	[INFO] 10.244.0.19:42842 - 42528 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001297863s
	[INFO] 10.244.0.19:56186 - 22905 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000162534s
	[INFO] 10.244.0.19:42842 - 57273 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001066528s
	[INFO] 10.244.0.19:42842 - 27836 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000108766s
	[INFO] 10.244.0.19:45096 - 6514 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109948s
	[INFO] 10.244.0.19:38296 - 30539 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056656s
	[INFO] 10.244.0.19:38296 - 42076 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000131478s
	[INFO] 10.244.0.19:45096 - 59091 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064245s
	[INFO] 10.244.0.19:38296 - 39562 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000086572s
	[INFO] 10.244.0.19:45096 - 58255 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047786s
	[INFO] 10.244.0.19:45096 - 39124 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046563s
	[INFO] 10.244.0.19:38296 - 9214 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051569s
	[INFO] 10.244.0.19:45096 - 19296 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047573s
	[INFO] 10.244.0.19:38296 - 7191 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050297s
	[INFO] 10.244.0.19:45096 - 41448 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047786s
	[INFO] 10.244.0.19:38296 - 41843 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049206s
	[INFO] 10.244.0.19:45096 - 56446 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001519087s
	[INFO] 10.244.0.19:38296 - 57806 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001866597s
	[INFO] 10.244.0.19:45096 - 20499 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001129248s
	[INFO] 10.244.0.19:45096 - 24967 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000082165s
	[INFO] 10.244.0.19:38296 - 28149 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002116498s
	[INFO] 10.244.0.19:38296 - 12514 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000519717s
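The NXDOMAIN/NOERROR pattern above is ordinary resolv.conf search-path expansion: with the cluster's usual ndots:5 setting, a name with fewer than five dots is first tried with each search suffix appended, and only the query for the unsuffixed name succeeds. A hypothetical reconstruction of the query order — the search list is inferred from the suffixes in the log (the querying pod's own namespace, ingress-nginx, comes first; the cloud host's domain last):

package main

import "fmt"

func main() {
	// Search suffixes inferred from the queries logged above.
	search := []string{
		"ingress-nginx.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	name := "hello-world-app.default.svc.cluster.local" // 4 dots < ndots:5
	for _, s := range search {
		fmt.Println(name+"."+s, "-> NXDOMAIN") // every suffixed form fails
	}
	fmt.Println(name, "-> NOERROR") // the bare name finally resolves
}
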
	
	
	==> describe nodes <==
	Name:               addons-482679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-482679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873
	                    minikube.k8s.io/name=addons-482679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T23_57_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-482679
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:57:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-482679
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 23:59:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:59:16 +0000   Wed, 27 Mar 2024 23:57:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:59:16 +0000   Wed, 27 Mar 2024 23:57:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:59:16 +0000   Wed, 27 Mar 2024 23:57:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:59:16 +0000   Wed, 27 Mar 2024 23:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-482679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb86c9df091e457f89e51ed729c30ce0
	  System UUID:                88d9ae12-e4f8-402d-963c-f5713b48548d
	  Boot ID:                    561aadd0-a15d-4e78-9187-a38c38772b44
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-qpg92    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  default                     hello-world-app-5d77478584-zdvkh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-7d69788767-52t8m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 coredns-76f75df574-g8vgw                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 etcd-addons-482679                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m19s
	  kube-system                 kindnet-425ft                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m7s
	  kube-system                 kube-apiserver-addons-482679               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-addons-482679      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-27xjv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-addons-482679               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 nvidia-device-plugin-daemonset-mrhg6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-ddrpj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-8j2mj             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node addons-482679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node addons-482679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x7 over 2m27s)  kubelet          Node addons-482679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m19s                  kubelet          Node addons-482679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s                  kubelet          Node addons-482679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s                  kubelet          Node addons-482679 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m19s                  kubelet          Node addons-482679 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m19s                  kubelet          Node addons-482679 status is now: NodeReady
	  Normal  RegisteredNode           2m7s                   node-controller  Node addons-482679 event: Registered Node addons-482679 in Controller
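A note on the Allocated resources figures above: the percentages are requests (or limits) divided by the node's allocatable capacity, apparently truncated to whole percents. A quick sanity check against the numbers shown:

	cpu:    850m  / 2000m (2 CPU)  = 42.5% -> 42%;   100m / 2000m = 5%
	memory: 348Mi / 8022564Ki      ≈ 4.4%  -> 4%;    476Mi / 8022564Ki ≈ 6.1% -> 6%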
	
	
	==> dmesg <==
	[  +0.000944] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000abecea9f
	[  +0.001109] FS-Cache: N-key=[8] 'e2425c0100000000'
	[  +0.002757] FS-Cache: Duplicate cookie detected
	[  +0.000760] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001099] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=0000000065f36b62
	[  +0.001116] FS-Cache: O-key=[8] 'e2425c0100000000'
	[  +0.000756] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000b46e9eef
	[  +0.001155] FS-Cache: N-key=[8] 'e2425c0100000000'
	[  +1.587855] FS-Cache: Duplicate cookie detected
	[  +0.000858] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=00000000570a01e0
	[  +0.001107] FS-Cache: O-key=[8] 'e1425c0100000000'
	[  +0.000881] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001094] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=0000000004f60fee
	[  +0.001050] FS-Cache: N-key=[8] 'e1425c0100000000'
	[  +0.278051] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000950] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=00000000a0dd260d
	[  +0.001058] FS-Cache: O-key=[8] 'e7425c0100000000'
	[  +0.000856] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.001042] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000e23f78d6
	[  +0.001095] FS-Cache: N-key=[8] 'e7425c0100000000'
	[Mar27 23:23] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [b3e82d379c9aa86026f058d6ef6fa6a500bd426fd6b17584eb588f27c9d1f7a3] <==
	{"level":"info","ts":"2024-03-27T23:57:07.576055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-27T23:57:07.576375Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-27T23:57:07.582272Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-27T23:57:07.582556Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-27T23:57:07.582568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-27T23:57:07.591729Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-27T23:57:07.591781Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-27T23:57:08.141962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-27T23:57:08.142085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-27T23:57:08.142121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-27T23:57:08.142174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-27T23:57:08.142214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-27T23:57:08.142274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-27T23:57:08.142314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-27T23:57:08.150118Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-482679 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T23:57:08.150225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:57:08.150552Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:57:08.162592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-27T23:57:08.162934Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:57:08.163014Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:57:08.181995Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:57:08.182634Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T23:57:08.182733Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T23:57:08.170769Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:57:08.184827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [83571fa875da70d3885554317fa46b9d3c20072e6274ec4e42bbfae96029ac08] <==
	2024/03/27 23:58:17 GCP Auth Webhook started!
	2024/03/27 23:58:34 Ready to marshal response ...
	2024/03/27 23:58:34 Ready to write response ...
	2024/03/27 23:58:47 Ready to marshal response ...
	2024/03/27 23:58:47 Ready to write response ...
	2024/03/27 23:58:57 Ready to marshal response ...
	2024/03/27 23:58:57 Ready to write response ...
	2024/03/27 23:59:07 Ready to marshal response ...
	2024/03/27 23:59:07 Ready to write response ...
	2024/03/27 23:59:15 Ready to marshal response ...
	2024/03/27 23:59:15 Ready to write response ...
	
	
	==> kernel <==
	 23:59:33 up  7:41,  0 users,  load average: 2.67, 2.58, 3.03
	Linux addons-482679 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3366eca273722b9ac99d067d5680385bdcef399d0e6bd8167537433aa723d237] <==
	I0327 23:57:28.976111       1 main.go:227] handling current node
	I0327 23:57:38.990144       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:57:38.990172       1 main.go:227] handling current node
	I0327 23:57:49.004891       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:57:49.004923       1 main.go:227] handling current node
	I0327 23:57:59.018034       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:57:59.018060       1 main.go:227] handling current node
	I0327 23:58:09.022521       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:09.022548       1 main.go:227] handling current node
	I0327 23:58:19.028486       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:19.028515       1 main.go:227] handling current node
	I0327 23:58:29.032741       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:29.032769       1 main.go:227] handling current node
	I0327 23:58:39.046030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:39.046065       1 main.go:227] handling current node
	I0327 23:58:49.059710       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:49.059739       1 main.go:227] handling current node
	I0327 23:58:59.063769       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:59.063797       1 main.go:227] handling current node
	I0327 23:59:09.074040       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:59:09.074068       1 main.go:227] handling current node
	I0327 23:59:19.086731       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:59:19.086760       1 main.go:227] handling current node
	I0327 23:59:29.094098       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:59:29.094130       1 main.go:227] handling current node
	
	
	==> kube-apiserver [def2c471b2a7f41c9953ec79ec1dd19d280412a4b314562fe9930012750407d9] <==
	W0327 23:58:02.697227       1 handler_proxy.go:93] no RequestInfo found in the context
	E0327 23:58:02.697302       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0327 23:58:02.748570       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0327 23:58:51.551008       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0327 23:58:52.618069       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0327 23:58:56.121148       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0327 23:58:57.120873       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0327 23:58:57.571210       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.119.125"}
	E0327 23:58:58.102971       1 watch.go:253] http2: stream closed
	I0327 23:59:03.706937       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0327 23:59:07.302974       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.175.5"}
	I0327 23:59:31.223605       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.223646       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:59:31.257729       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.258032       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:59:31.274471       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.274942       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:59:31.303241       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.303296       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:59:31.314232       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.314288       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0327 23:59:32.275441       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0327 23:59:32.322165       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0327 23:59:32.332810       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [0e9919976bff8676f14254f57114a57c427cbe952df2be16459f9dc54e130dd0] <==
	W0327 23:59:09.556869       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:09.556906       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 23:59:10.153180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="491.032µs"
	I0327 23:59:11.154777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.943µs"
	I0327 23:59:11.214114       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0327 23:59:12.157067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.194µs"
	I0327 23:59:15.504271       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0327 23:59:24.711437       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0327 23:59:24.811957       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I0327 23:59:25.251218       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0327 23:59:25.265358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="6.039µs"
	I0327 23:59:25.290834       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0327 23:59:27.386752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.14µs"
	W0327 23:59:30.403401       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:30.403492       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 23:59:31.360272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="4.587µs"
	E0327 23:59:32.277693       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:32.324315       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:32.334588       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:33.398053       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:33.398087       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:33.482094       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:33.482130       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:33.917553       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:33.917586       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
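The repeated PartialObjectMetadata failures line up with the apiserver log above: once the snapshot.storage.k8s.io watchers are terminated during addon teardown, the metadata informers keep retrying against API resources that no longer exist. An illustrative check (assumed command, not from this run):

	kubectl --context addons-482679 api-resources --api-group=snapshot.storage.k8s.io
	# expected to list nothing once the snapshot CRDs are removed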
	
	
	==> kube-proxy [48bf7168aaa6505e6745f9880d4715e768b65da521579710142f84feeb41824d] <==
	I0327 23:57:28.605763       1 server_others.go:72] "Using iptables proxy"
	I0327 23:57:28.619056       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0327 23:57:28.646608       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0327 23:57:28.646645       1 server_others.go:168] "Using iptables Proxier"
	I0327 23:57:28.648424       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0327 23:57:28.648440       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0327 23:57:28.648472       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 23:57:28.648704       1 server.go:865] "Version info" version="v1.29.3"
	I0327 23:57:28.648715       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:57:28.650224       1 config.go:188] "Starting service config controller"
	I0327 23:57:28.650247       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 23:57:28.650278       1 config.go:97] "Starting endpoint slice config controller"
	I0327 23:57:28.650283       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 23:57:28.652797       1 config.go:315] "Starting node config controller"
	I0327 23:57:28.652816       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 23:57:28.750402       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 23:57:28.750460       1 shared_informer.go:318] Caches are synced for service config
	I0327 23:57:28.753376       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3bbf36bc6c552b04b3ca087a3c2269b74da8d1c351a786b59ab98b4b90efc359] <==
	W0327 23:57:11.406372       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 23:57:11.406428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 23:57:11.406511       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:57:11.406531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:57:11.406670       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 23:57:11.406690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 23:57:11.406768       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 23:57:11.406787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 23:57:11.406925       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 23:57:11.406945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 23:57:11.407105       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 23:57:11.407127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 23:57:11.407638       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 23:57:11.407665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 23:57:11.407749       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 23:57:11.407771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 23:57:11.407948       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 23:57:11.407970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 23:57:11.408136       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 23:57:11.408158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 23:57:11.408307       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 23:57:11.408329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 23:57:11.408916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 23:57:11.408941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0327 23:57:12.999158       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 23:59:28 addons-482679 kubelet[1513]: I0327 23:59:28.679449    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e58310-9491-4e23-8ea4-370141f950a9-kube-api-access-fjgfz" (OuterVolumeSpecName: "kube-api-access-fjgfz") pod "a1e58310-9491-4e23-8ea4-370141f950a9" (UID: "a1e58310-9491-4e23-8ea4-370141f950a9"). InnerVolumeSpecName "kube-api-access-fjgfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 23:59:28 addons-482679 kubelet[1513]: I0327 23:59:28.681560    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e58310-9491-4e23-8ea4-370141f950a9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a1e58310-9491-4e23-8ea4-370141f950a9" (UID: "a1e58310-9491-4e23-8ea4-370141f950a9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 27 23:59:28 addons-482679 kubelet[1513]: I0327 23:59:28.777826    1513 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1e58310-9491-4e23-8ea4-370141f950a9-webhook-cert\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:28 addons-482679 kubelet[1513]: I0327 23:59:28.777878    1513 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fjgfz\" (UniqueName: \"kubernetes.io/projected/a1e58310-9491-4e23-8ea4-370141f950a9-kube-api-access-fjgfz\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:29 addons-482679 kubelet[1513]: I0327 23:59:29.391439    1513 scope.go:117] "RemoveContainer" containerID="5986c7b940768b55abaad3079dd261558d829009c17585f3f260203239687296"
	Mar 27 23:59:29 addons-482679 kubelet[1513]: I0327 23:59:29.401545    1513 scope.go:117] "RemoveContainer" containerID="5986c7b940768b55abaad3079dd261558d829009c17585f3f260203239687296"
	Mar 27 23:59:29 addons-482679 kubelet[1513]: E0327 23:59:29.402267    1513 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5986c7b940768b55abaad3079dd261558d829009c17585f3f260203239687296\": not found" containerID="5986c7b940768b55abaad3079dd261558d829009c17585f3f260203239687296"
	Mar 27 23:59:29 addons-482679 kubelet[1513]: I0327 23:59:29.402325    1513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5986c7b940768b55abaad3079dd261558d829009c17585f3f260203239687296"} err="failed to get container status \"5986c7b940768b55abaad3079dd261558d829009c17585f3f260203239687296\": rpc error: code = NotFound desc = an error occurred when try to find container \"5986c7b940768b55abaad3079dd261558d829009c17585f3f260203239687296\": not found"
	Mar 27 23:59:30 addons-482679 kubelet[1513]: I0327 23:59:30.060840    1513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e58310-9491-4e23-8ea4-370141f950a9" path="/var/lib/kubelet/pods/a1e58310-9491-4e23-8ea4-370141f950a9/volumes"
	Mar 27 23:59:31 addons-482679 kubelet[1513]: I0327 23:59:31.598075    1513 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp265\" (UniqueName: \"kubernetes.io/projected/90d05ce1-64eb-425c-a5c3-9cc16d6d5458-kube-api-access-hp265\") pod \"90d05ce1-64eb-425c-a5c3-9cc16d6d5458\" (UID: \"90d05ce1-64eb-425c-a5c3-9cc16d6d5458\") "
	Mar 27 23:59:31 addons-482679 kubelet[1513]: I0327 23:59:31.601317    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d05ce1-64eb-425c-a5c3-9cc16d6d5458-kube-api-access-hp265" (OuterVolumeSpecName: "kube-api-access-hp265") pod "90d05ce1-64eb-425c-a5c3-9cc16d6d5458" (UID: "90d05ce1-64eb-425c-a5c3-9cc16d6d5458"). InnerVolumeSpecName "kube-api-access-hp265". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 23:59:31 addons-482679 kubelet[1513]: I0327 23:59:31.698331    1513 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqgxz\" (UniqueName: \"kubernetes.io/projected/f5f560d1-85b1-4430-9933-76522bc5156a-kube-api-access-lqgxz\") pod \"f5f560d1-85b1-4430-9933-76522bc5156a\" (UID: \"f5f560d1-85b1-4430-9933-76522bc5156a\") "
	Mar 27 23:59:31 addons-482679 kubelet[1513]: I0327 23:59:31.698410    1513 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hp265\" (UniqueName: \"kubernetes.io/projected/90d05ce1-64eb-425c-a5c3-9cc16d6d5458-kube-api-access-hp265\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:31 addons-482679 kubelet[1513]: I0327 23:59:31.700311    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f560d1-85b1-4430-9933-76522bc5156a-kube-api-access-lqgxz" (OuterVolumeSpecName: "kube-api-access-lqgxz") pod "f5f560d1-85b1-4430-9933-76522bc5156a" (UID: "f5f560d1-85b1-4430-9933-76522bc5156a"). InnerVolumeSpecName "kube-api-access-lqgxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 23:59:31 addons-482679 kubelet[1513]: I0327 23:59:31.798586    1513 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lqgxz\" (UniqueName: \"kubernetes.io/projected/f5f560d1-85b1-4430-9933-76522bc5156a-kube-api-access-lqgxz\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:32 addons-482679 kubelet[1513]: I0327 23:59:32.404960    1513 scope.go:117] "RemoveContainer" containerID="9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e"
	Mar 27 23:59:32 addons-482679 kubelet[1513]: I0327 23:59:32.427095    1513 scope.go:117] "RemoveContainer" containerID="9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e"
	Mar 27 23:59:32 addons-482679 kubelet[1513]: E0327 23:59:32.453319    1513 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\": not found" containerID="9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e"
	Mar 27 23:59:32 addons-482679 kubelet[1513]: I0327 23:59:32.453520    1513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e"} err="failed to get container status \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"9141abc91fab70be9a69a0639962b6339dd843b23367fdadffd4c540072a2f3e\": not found"
	Mar 27 23:59:32 addons-482679 kubelet[1513]: I0327 23:59:32.453634    1513 scope.go:117] "RemoveContainer" containerID="9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88"
	Mar 27 23:59:32 addons-482679 kubelet[1513]: I0327 23:59:32.499349    1513 scope.go:117] "RemoveContainer" containerID="9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88"
	Mar 27 23:59:32 addons-482679 kubelet[1513]: E0327 23:59:32.499827    1513 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\": not found" containerID="9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88"
	Mar 27 23:59:32 addons-482679 kubelet[1513]: I0327 23:59:32.499872    1513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88"} err="failed to get container status \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\": rpc error: code = NotFound desc = an error occurred when try to find container \"9264a8a5de56e7265173e49e8f23eba1734e47dfecbdbbdb077589b5e7496c88\": not found"
	Mar 27 23:59:34 addons-482679 kubelet[1513]: I0327 23:59:34.051925    1513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d05ce1-64eb-425c-a5c3-9cc16d6d5458" path="/var/lib/kubelet/pods/90d05ce1-64eb-425c-a5c3-9cc16d6d5458/volumes"
	Mar 27 23:59:34 addons-482679 kubelet[1513]: I0327 23:59:34.052390    1513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f560d1-85b1-4430-9933-76522bc5156a" path="/var/lib/kubelet/pods/f5f560d1-85b1-4430-9933-76522bc5156a/volumes"
	
	
	==> storage-provisioner [63306e1955da83a00baddf978af7b1509abe2edf5d97f62bab1b55d5543e0acb] <==
	I0327 23:57:34.068993       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 23:57:34.113332       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 23:57:34.113371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 23:57:34.125857       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 23:57:34.126079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-482679_38ceff6b-df1c-4724-a410-a20da8d23b6a!
	I0327 23:57:34.127007       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f91cafc-f89e-4226-8ddb-30b6386c3c0a", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-482679_38ceff6b-df1c-4724-a410-a20da8d23b6a became leader
	I0327 23:57:34.226465       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-482679_38ceff6b-df1c-4724-a410-a20da8d23b6a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-482679 -n addons-482679
helpers_test.go:261: (dbg) Run:  kubectl --context addons-482679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.94s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-482679 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-482679 --alsologtostderr -v=1: exit status 11 (718.761621ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 23:59:46.115520 1969663 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:59:46.116428 1969663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:59:46.116476 1969663 out.go:304] Setting ErrFile to fd 2...
	I0327 23:59:46.116499 1969663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:59:46.116799 1969663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0327 23:59:46.117122 1969663 mustload.go:65] Loading cluster: addons-482679
	I0327 23:59:46.117565 1969663 config.go:182] Loaded profile config "addons-482679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 23:59:46.117607 1969663 addons.go:597] checking whether the cluster is paused
	I0327 23:59:46.117749 1969663 config.go:182] Loaded profile config "addons-482679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 23:59:46.117779 1969663 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:59:46.118321 1969663 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:59:46.147269 1969663 ssh_runner.go:195] Run: systemctl --version
	I0327 23:59:46.147331 1969663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:59:46.191227 1969663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:59:46.315107 1969663 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0327 23:59:46.315204 1969663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 23:59:46.373522 1969663 cri.go:89] found id: "2a35ab8a61e7c79c31adadc4b51316c6cdb4b0452dbe11c81fedb4b18115ca0b"
	I0327 23:59:46.373548 1969663 cri.go:89] found id: "63306e1955da83a00baddf978af7b1509abe2edf5d97f62bab1b55d5543e0acb"
	I0327 23:59:46.373553 1969663 cri.go:89] found id: "3366eca273722b9ac99d067d5680385bdcef399d0e6bd8167537433aa723d237"
	I0327 23:59:46.373557 1969663 cri.go:89] found id: "48bf7168aaa6505e6745f9880d4715e768b65da521579710142f84feeb41824d"
	I0327 23:59:46.373565 1969663 cri.go:89] found id: "3bbf36bc6c552b04b3ca087a3c2269b74da8d1c351a786b59ab98b4b90efc359"
	I0327 23:59:46.373569 1969663 cri.go:89] found id: "0e9919976bff8676f14254f57114a57c427cbe952df2be16459f9dc54e130dd0"
	I0327 23:59:46.373572 1969663 cri.go:89] found id: "b3e82d379c9aa86026f058d6ef6fa6a500bd426fd6b17584eb588f27c9d1f7a3"
	I0327 23:59:46.373575 1969663 cri.go:89] found id: "def2c471b2a7f41c9953ec79ec1dd19d280412a4b314562fe9930012750407d9"
	I0327 23:59:46.373578 1969663 cri.go:89] found id: ""
	I0327 23:59:46.373630 1969663 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0327 23:59:46.437550 1969663 out.go:177] 
	W0327 23:59:46.439008 1969663 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-27T23:59:46Z" level=error msg="stat /run/containerd/runc/k8s.io/db1cb5fa8dec485e981cfa17582265360b251014bc0a5d4e002570aecbba4041: no such file or directory"
	
	W0327 23:59:46.439042 1969663 out.go:239] * 
	W0327 23:59:46.723716 1969663 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 23:59:46.725800 1969663 out.go:177] 

                                                
                                                
** /stderr **
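The failure itself looks like a race in minikube's paused-state check rather than in the Headlamp addon: the trace above first lists kube-system containers with crictl and then asks runc for their state, and a container torn down between the two steps (the parallel Ingress test was disabling addons at the same time) leaves runc stat-ing a state directory that no longer exists. The two commands involved, as captured in the trace:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json
	# if a container is deleted between (or during) these calls, runc can fail with
	# "stat /run/containerd/runc/k8s.io/<id>: no such file or directory"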
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-482679 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-482679
helpers_test.go:235: (dbg) docker inspect addons-482679:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c",
	        "Created": "2024-03-27T23:56:48.813518542Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1958426,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-27T23:56:49.067602775Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f9b5358e8c18dbe49e632154cad75e0968b2e103f621caff2c3ed996f4155861",
	        "ResolvConfPath": "/var/lib/docker/containers/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c/hostname",
	        "HostsPath": "/var/lib/docker/containers/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c/hosts",
	        "LogPath": "/var/lib/docker/containers/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c/73a498ce008bf9d24718094301968ce68f9a7671db39235ff1f680c26f3a394c-json.log",
	        "Name": "/addons-482679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-482679:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-482679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9aa8ed8b93287c234ae3b78023e0f7c5e3b43641dec84ad9d090c99c8fb41740-init/diff:/var/lib/docker/overlay2/07f877cb7d661b8e8bf24e390c9cea61396c20d4f4c8c6395f4b5d699fc104ea/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9aa8ed8b93287c234ae3b78023e0f7c5e3b43641dec84ad9d090c99c8fb41740/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9aa8ed8b93287c234ae3b78023e0f7c5e3b43641dec84ad9d090c99c8fb41740/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9aa8ed8b93287c234ae3b78023e0f7c5e3b43641dec84ad9d090c99c8fb41740/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-482679",
	                "Source": "/var/lib/docker/volumes/addons-482679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-482679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-482679",
	                "name.minikube.sigs.k8s.io": "addons-482679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74d9a91cc060cfa9326d2e400f0bb3c3b25e70aac9db767a505adeb7ab3918a5",
	            "SandboxKey": "/var/run/docker/netns/74d9a91cc060",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35039"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35038"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35035"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35037"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35036"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-482679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "79ac8fcdbbc0a391be86ab0bec214c18d164730c8da1a82ae7b163a5da0a41d5",
	                    "EndpointID": "681fdcfee3fd4ed4a68b2d0d61718f8a427a943f534e277f98419cdb612de316",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-482679",
	                        "73a498ce008b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
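
The inspect output above also documents the access pattern the rest of this log depends on: each service port of the kicbase container (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 with an ephemeral HostPort (35035-35039 here). A minimal Go sketch of recovering such a mapping from `docker inspect` output; the helper name and error handling are illustrative, not minikube code:

	// portprobe.go - sketch: recover the host port Docker assigned to a
	// container port published as 127.0.0.1::<port>.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)
	
	// inspectEntry mirrors only the docker inspect fields needed here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	
	// hostPort returns the host port Docker bound for a published container port.
	func hostPort(container, port string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry // docker inspect prints a JSON array
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := entries[0].NetworkSettings.Ports[port]
		if len(bindings) == 0 {
			return "", fmt.Errorf("port %s is not published", port)
		}
		return bindings[0].HostPort, nil // e.g. "35039" for "22/tcp" above
	}
	
	func main() {
		p, err := hostPort("addons-482679", "22/tcp")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh reachable at 127.0.0.1:" + p)
	}

minikube itself reads the same field with a Go template, as the repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls later in this log show.
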
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-482679 -n addons-482679
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-482679 logs -n 25: (1.438026969s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-136920                                                                     | download-only-136920   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-223414                                                                     | download-only-223414   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-984922                                                                     | download-only-984922   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| start   | --download-only -p                                                                          | download-docker-147301 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | download-docker-147301                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	| delete  | -p download-docker-147301                                                                   | download-docker-147301 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-677516   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | binary-mirror-677516                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:43099                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-677516                                                                     | binary-mirror-677516   | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | addons-482679                                                                               |                        |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | addons-482679                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-482679 --wait=true                                                                | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	| ip      | addons-482679 ip                                                                            | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	| addons  | addons-482679 addons disable                                                                | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-482679 addons                                                                        | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | disable metrics-server                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | addons-482679                                                                               |                        |         |                |                     |                     |
	| ssh     | addons-482679 ssh curl -s                                                                   | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |                |                     |                     |
	| ip      | addons-482679 ip                                                                            | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	| addons  | addons-482679 addons disable                                                                | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-482679 addons                                                                        | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-482679 addons disable                                                                | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |                |                     |                     |
	| addons  | addons-482679 addons                                                                        | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | -p addons-482679                                                                            |                        |         |                |                     |                     |
	| ssh     | addons-482679 ssh cat                                                                       | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | /opt/local-path-provisioner/pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC | 27 Mar 24 23:59 UTC |
	|         | addons-482679                                                                               |                        |         |                |                     |                     |
	| addons  | addons-482679 addons disable                                                                | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-482679          | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:59 UTC |                     |
	|         | -p addons-482679                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:56:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:56:25.103821 1957985 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:56:25.104003 1957985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:56:25.104032 1957985 out.go:304] Setting ErrFile to fd 2...
	I0327 23:56:25.104039 1957985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:56:25.104334 1957985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0327 23:56:25.104862 1957985 out.go:298] Setting JSON to false
	I0327 23:56:25.105836 1957985 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27523,"bootTime":1711556262,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 23:56:25.105942 1957985 start.go:139] virtualization:  
	I0327 23:56:25.138838 1957985 out.go:177] * [addons-482679] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 23:56:25.165848 1957985 out.go:177]   - MINIKUBE_LOCATION=18158
	I0327 23:56:25.165949 1957985 notify.go:220] Checking for updates...
	I0327 23:56:25.234606 1957985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:56:25.267060 1957985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0327 23:56:25.298114 1957985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0327 23:56:25.330529 1957985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 23:56:25.347615 1957985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:56:25.374797 1957985 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:56:25.392412 1957985 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 23:56:25.392534 1957985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:56:25.449949 1957985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 23:56:25.43958026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:56:25.450091 1957985 docker.go:295] overlay module found
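
The `docker system info --format "{{json .}}"` probe above is decoded into a struct so the driver check can read limits and the cgroup driver. A hedged Go sketch of the same probe, decoding only a few of the fields visible in the log line (the struct is a tiny subset chosen for illustration):

	// dockerinfo.go - sketch of the `docker system info --format "{{json .}}"`
	// probe; the fields match keys visible in the info line above.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)
	
	type dockerInfo struct {
		ServerVersion string
		CgroupDriver  string
		MemTotal      int64
		NCPU          int
	}
	
	func main() {
		out, err := exec.Command("docker", "system", "info",
			"--format", "{{json .}}").Output()
		if err != nil {
			log.Fatalf("docker not reachable: %v", err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("server %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
	}
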
	I0327 23:56:25.491215 1957985 out.go:177] * Using the docker driver based on user configuration
	I0327 23:56:25.523602 1957985 start.go:297] selected driver: docker
	I0327 23:56:25.523629 1957985 start.go:901] validating driver "docker" against <nil>
	I0327 23:56:25.523646 1957985 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:56:25.524298 1957985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:56:25.580901 1957985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 23:56:25.571893909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:56:25.581076 1957985 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:56:25.581343 1957985 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:56:25.619352 1957985 out.go:177] * Using Docker driver with root privileges
	I0327 23:56:25.651401 1957985 cni.go:84] Creating CNI manager for ""
	I0327 23:56:25.651440 1957985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:56:25.651451 1957985 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 23:56:25.651544 1957985 start.go:340] cluster config:
	{Name:addons-482679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:56:25.684283 1957985 out.go:177] * Starting "addons-482679" primary control-plane node in "addons-482679" cluster
	I0327 23:56:25.711895 1957985 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 23:56:25.741467 1957985 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 23:56:25.764392 1957985 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 23:56:25.764456 1957985 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0327 23:56:25.764466 1957985 cache.go:56] Caching tarball of preloaded images
	I0327 23:56:25.764586 1957985 preload.go:173] Found /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 23:56:25.764596 1957985 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0327 23:56:25.764921 1957985 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 23:56:25.764936 1957985 profile.go:142] Saving config to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/config.json ...
	I0327 23:56:25.764966 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/config.json: {Name:mk6d358cd17889935dbc60afe70eb10b4aa4e09d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:25.777611 1957985 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 23:56:25.777731 1957985 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 23:56:25.777756 1957985 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 23:56:25.777762 1957985 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 23:56:25.777777 1957985 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 23:56:25.777794 1957985 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 from local cache
	I0327 23:56:41.827268 1957985 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 from cached tarball
	I0327 23:56:41.827309 1957985 cache.go:194] Successfully downloaded all kic artifacts
	I0327 23:56:41.827339 1957985 start.go:360] acquireMachinesLock for addons-482679: {Name:mkfd4bcc4f7e46622616fd324fcf1ee8bd5a31ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:56:41.827991 1957985 start.go:364] duration metric: took 621.813µs to acquireMachinesLock for "addons-482679"
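
acquireMachinesLock above serializes machine creation with a 500ms retry delay and a 10m timeout. The sketch below is an illustrative stand-in built on an exclusive lock file; minikube's actual lock implementation differs, and the path used here is hypothetical:

	// lock.go - illustrative stand-in for the acquireMachinesLock step
	// (Delay:500ms Timeout:10m0s in the log); not minikube's real locking.
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation atomic: exactly one caller wins.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay) // poll again, mirroring the 500ms retry delay
		}
	}
	
	func main() {
		release, err := acquire("/tmp/minikube-machines.lock",
			500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; provisioning machine...")
	}
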
	I0327 23:56:41.828024 1957985 start.go:93] Provisioning new machine with config: &{Name:addons-482679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 23:56:41.828134 1957985 start.go:125] createHost starting for "" (driver="docker")
	I0327 23:56:41.830088 1957985 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0327 23:56:41.830326 1957985 start.go:159] libmachine.API.Create for "addons-482679" (driver="docker")
	I0327 23:56:41.830357 1957985 client.go:168] LocalClient.Create starting
	I0327 23:56:41.830457 1957985 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem
	I0327 23:56:42.188094 1957985 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem
	I0327 23:56:42.414264 1957985 cli_runner.go:164] Run: docker network inspect addons-482679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0327 23:56:42.426989 1957985 cli_runner.go:211] docker network inspect addons-482679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0327 23:56:42.427074 1957985 network_create.go:281] running [docker network inspect addons-482679] to gather additional debugging logs...
	I0327 23:56:42.427094 1957985 cli_runner.go:164] Run: docker network inspect addons-482679
	W0327 23:56:42.440650 1957985 cli_runner.go:211] docker network inspect addons-482679 returned with exit code 1
	I0327 23:56:42.440683 1957985 network_create.go:284] error running [docker network inspect addons-482679]: docker network inspect addons-482679: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-482679 not found
	I0327 23:56:42.440697 1957985 network_create.go:286] output of [docker network inspect addons-482679]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-482679 not found
	
	** /stderr **
	I0327 23:56:42.440814 1957985 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 23:56:42.455145 1957985 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025c0170}
	I0327 23:56:42.455187 1957985 network_create.go:124] attempt to create docker network addons-482679 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0327 23:56:42.455250 1957985 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-482679 addons-482679
	I0327 23:56:42.512310 1957985 network_create.go:108] docker network addons-482679 192.168.49.0/24 created
	I0327 23:56:42.512344 1957985 kic.go:121] calculated static IP "192.168.49.2" for the "addons-482679" container
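
The network_create sequence above probes for a free private /24 (settling on 192.168.49.0/24), creates the bridge network with the `docker network create` call quoted verbatim, and then derives the node's static IP as the first client address after the gateway. A small Go sketch of that derivation; the creation itself stays a docker CLI call:

	// subnet.go - illustrative only: derive gateway (.1) and first client
	// address (.2) from the chosen subnet, as the log does for
	// 192.168.49.0/24 -> gateway 192.168.49.1, node IP 192.168.49.2.
	package main
	
	import (
		"fmt"
		"net/netip"
	)
	
	func main() {
		prefix := netip.MustParsePrefix("192.168.49.0/24")
		gateway := prefix.Addr().Next() // 192.168.49.1
		nodeIP := gateway.Next()        // 192.168.49.2
		fmt.Println("gateway:", gateway, "node:", nodeIP)
		// The network itself is created by the CLI call quoted in the log:
		//   docker network create --driver=bridge --subnet=192.168.49.0/24
		//     --gateway=192.168.49.1 -o --ip-masq -o --icc
		//     -o com.docker.network.driver.mtu=1500 ... addons-482679
	}
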
	I0327 23:56:42.512418 1957985 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0327 23:56:42.524433 1957985 cli_runner.go:164] Run: docker volume create addons-482679 --label name.minikube.sigs.k8s.io=addons-482679 --label created_by.minikube.sigs.k8s.io=true
	I0327 23:56:42.538485 1957985 oci.go:103] Successfully created a docker volume addons-482679
	I0327 23:56:42.538585 1957985 cli_runner.go:164] Run: docker run --rm --name addons-482679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-482679 --entrypoint /usr/bin/test -v addons-482679:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib
	I0327 23:56:44.455615 1957985 cli_runner.go:217] Completed: docker run --rm --name addons-482679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-482679 --entrypoint /usr/bin/test -v addons-482679:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib: (1.916970324s)
	I0327 23:56:44.455645 1957985 oci.go:107] Successfully prepared a docker volume addons-482679
	I0327 23:56:44.455679 1957985 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 23:56:44.455704 1957985 kic.go:194] Starting extracting preloaded images to volume ...
	I0327 23:56:44.455784 1957985 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-482679:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0327 23:56:48.743861 1957985 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-482679:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.288038637s)
	I0327 23:56:48.743893 1957985 kic.go:203] duration metric: took 4.288185959s to extract preloaded images to volume ...
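
The two `docker run --rm` invocations above are the same throwaway-container trick used twice: first `--entrypoint /usr/bin/test ... -d /var/lib` to verify the fresh named volume is usable, then `--entrypoint /usr/bin/tar` to unpack the preload tarball straight into it. A Go sketch of the extraction step, with placeholder paths:

	// extract.go - sketch of the "unpack a tarball into a named volume via
	// a throwaway container" pattern above; the paths are placeholders.
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func extractIntoVolume(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
			"-v", volume+":/extractDir",        // named volume as target
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stderr = log.Writer()
		return cmd.Run()
	}
	
	func main() {
		if err := extractIntoVolume(
			"/path/to/preloaded-images.tar.lz4", // hypothetical path
			"addons-482679",
			"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0",
		); err != nil {
			log.Fatal(err)
		}
	}
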
	W0327 23:56:48.744043 1957985 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0327 23:56:48.744180 1957985 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0327 23:56:48.800638 1957985 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-482679 --name addons-482679 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-482679 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-482679 --network addons-482679 --ip 192.168.49.2 --volume addons-482679:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8
	I0327 23:56:49.076363 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Running}}
	I0327 23:56:49.091738 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:56:49.113582 1957985 cli_runner.go:164] Run: docker exec addons-482679 stat /var/lib/dpkg/alternatives/iptables
	I0327 23:56:49.194587 1957985 oci.go:144] the created container "addons-482679" has a running status.
	I0327 23:56:49.194619 1957985 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa...
	I0327 23:56:49.915884 1957985 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0327 23:56:49.939062 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:56:49.959760 1957985 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0327 23:56:49.959779 1957985 kic_runner.go:114] Args: [docker exec --privileged addons-482679 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0327 23:56:50.010480 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:56:50.033077 1957985 machine.go:94] provisionDockerMachine start ...
	I0327 23:56:50.033188 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:50.050825 1957985 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:50.051092 1957985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35039 <nil> <nil>}
	I0327 23:56:50.051101 1957985 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 23:56:50.179391 1957985 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-482679
	
	I0327 23:56:50.179473 1957985 ubuntu.go:169] provisioning hostname "addons-482679"
	I0327 23:56:50.179577 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:50.198186 1957985 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:50.198427 1957985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35039 <nil> <nil>}
	I0327 23:56:50.198439 1957985 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-482679 && echo "addons-482679" | sudo tee /etc/hostname
	I0327 23:56:50.345936 1957985 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-482679
	
	I0327 23:56:50.346065 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:50.361171 1957985 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:50.361414 1957985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35039 <nil> <nil>}
	I0327 23:56:50.361431 1957985 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-482679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-482679/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-482679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:56:50.482094 1957985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:56:50.482122 1957985 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18158-1951721/.minikube CaCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18158-1951721/.minikube}
	I0327 23:56:50.482147 1957985 ubuntu.go:177] setting up certificates
	I0327 23:56:50.482156 1957985 provision.go:84] configureAuth start
	I0327 23:56:50.482215 1957985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-482679
	I0327 23:56:50.497392 1957985 provision.go:143] copyHostCerts
	I0327 23:56:50.497481 1957985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.pem (1078 bytes)
	I0327 23:56:50.497598 1957985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/cert.pem (1123 bytes)
	I0327 23:56:50.497653 1957985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/key.pem (1675 bytes)
	I0327 23:56:50.497695 1957985 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem org=jenkins.addons-482679 san=[127.0.0.1 192.168.49.2 addons-482679 localhost minikube]
	I0327 23:56:50.919058 1957985 provision.go:177] copyRemoteCerts
	I0327 23:56:50.919153 1957985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:56:50.919217 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:50.933780 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.023177 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:56:51.047902 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 23:56:51.072742 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 23:56:51.097514 1957985 provision.go:87] duration metric: took 615.344563ms to configureAuth
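
configureAuth above generates a server certificate whose SANs cover every address a client might dial (127.0.0.1, 192.168.49.2, addons-482679, localhost, minikube) and scp's it to /etc/docker inside the machine. The sketch below produces a SAN-bearing certificate; it is self-signed for brevity, whereas the log shows minikube signing against its own CA (ca.pem/ca-key.pem):

	// sancert.go - minimal sketch: certificate carrying the same kinds of
	// SANs as configureAuth; self-signed here, unlike minikube's CA-signed cert.
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-482679"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: 127.0.0.1 192.168.49.2 addons-482679 localhost minikube
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"addons-482679", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}
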
	I0327 23:56:51.097541 1957985 ubuntu.go:193] setting minikube options for container-runtime
	I0327 23:56:51.097730 1957985 config.go:182] Loaded profile config "addons-482679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 23:56:51.097745 1957985 machine.go:97] duration metric: took 1.064649964s to provisionDockerMachine
	I0327 23:56:51.097752 1957985 client.go:171] duration metric: took 9.26738986s to LocalClient.Create
	I0327 23:56:51.097767 1957985 start.go:167] duration metric: took 9.267441634s to libmachine.API.Create "addons-482679"
	I0327 23:56:51.097778 1957985 start.go:293] postStartSetup for "addons-482679" (driver="docker")
	I0327 23:56:51.097788 1957985 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:56:51.097840 1957985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:56:51.097887 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:51.114033 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.203387 1957985 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:56:51.206671 1957985 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 23:56:51.206748 1957985 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 23:56:51.206767 1957985 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 23:56:51.206775 1957985 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0327 23:56:51.206785 1957985 filesync.go:126] Scanning /home/jenkins/minikube-integration/18158-1951721/.minikube/addons for local assets ...
	I0327 23:56:51.206851 1957985 filesync.go:126] Scanning /home/jenkins/minikube-integration/18158-1951721/.minikube/files for local assets ...
	I0327 23:56:51.206881 1957985 start.go:296] duration metric: took 109.097092ms for postStartSetup
	I0327 23:56:51.207185 1957985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-482679
	I0327 23:56:51.223144 1957985 profile.go:142] Saving config to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/config.json ...
	I0327 23:56:51.223465 1957985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 23:56:51.223520 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:51.238484 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.322517 1957985 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0327 23:56:51.326979 1957985 start.go:128] duration metric: took 9.498827247s to createHost
	I0327 23:56:51.327001 1957985 start.go:83] releasing machines lock for "addons-482679", held for 9.498995614s
	I0327 23:56:51.327076 1957985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-482679
	I0327 23:56:51.342497 1957985 ssh_runner.go:195] Run: cat /version.json
	I0327 23:56:51.342550 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:51.342552 1957985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:56:51.342613 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:56:51.363841 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.377847 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:56:51.453155 1957985 ssh_runner.go:195] Run: systemctl --version
	I0327 23:56:51.567261 1957985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 23:56:51.571697 1957985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0327 23:56:51.597057 1957985 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0327 23:56:51.597133 1957985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:56:51.625487 1957985 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
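The two find commands above first patch any loopback CNI config (adding the missing "name" field and pinning cniVersion to 1.0.0), then move bridge/podman configs aside so they cannot conflict with the cluster CNI. A minimal sketch of what a patched loopback file ends up looking like; the filename is a hypothetical example, the fields follow from the sed expressions above:

	# hypothetical path; the actual file matched /etc/cni/net.d/*loopback.conf*
	$ cat /etc/cni/net.d/200-loopback.conf
	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}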
	I0327 23:56:51.625511 1957985 start.go:494] detecting cgroup driver to use...
	I0327 23:56:51.625546 1957985 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 23:56:51.625600 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 23:56:51.637764 1957985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 23:56:51.648648 1957985 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:56:51.648716 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:56:51.662554 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:56:51.677171 1957985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:56:51.764995 1957985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:56:51.859743 1957985 docker.go:233] disabling docker service ...
	I0327 23:56:51.859841 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:56:51.879357 1957985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:56:51.892189 1957985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:56:51.982747 1957985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:56:52.079929 1957985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 23:56:52.091193 1957985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:56:52.107279 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0327 23:56:52.116873 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 23:56:52.126418 1957985 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 23:56:52.126487 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 23:56:52.135948 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 23:56:52.145560 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 23:56:52.155477 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 23:56:52.165979 1957985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:56:52.175366 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 23:56:52.186689 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 23:56:52.197012 1957985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 23:56:52.206702 1957985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:56:52.215358 1957985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:56:52.223936 1957985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:52.312907 1957985 ssh_runner.go:195] Run: sudo systemctl restart containerd
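The sed edits above rewrite /etc/containerd/config.toml before this restart: the sandbox image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false to match the host's detected cgroupfs driver, legacy runtime names are mapped to io.containerd.runc.v2, and unprivileged ports are enabled under the CRI plugin. A sketch of the stanza the SystemdCgroup edit targets, assuming the stock containerd 1.6 config layout:

	# illustrative stanza (stock containerd 1.6 layout) that the sed above edits:
	#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	#     SystemdCgroup = false
	$ sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml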
	I0327 23:56:52.435767 1957985 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0327 23:56:52.435891 1957985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0327 23:56:52.439539 1957985 start.go:562] Will wait 60s for crictl version
	I0327 23:56:52.439628 1957985 ssh_runner.go:195] Run: which crictl
	I0327 23:56:52.442967 1957985 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:56:52.484243 1957985 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
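The version probe above reaches containerd because crictl reads the /etc/crictl.yaml written at 23:56:52.091. The explicit-flag equivalent is sketched here; same endpoint, with the flag spelled out only for illustration:

	$ sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version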
	I0327 23:56:52.484356 1957985 ssh_runner.go:195] Run: containerd --version
	I0327 23:56:52.506760 1957985 ssh_runner.go:195] Run: containerd --version
	I0327 23:56:52.529493 1957985 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0327 23:56:52.531358 1957985 cli_runner.go:164] Run: docker network inspect addons-482679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 23:56:52.544026 1957985 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0327 23:56:52.547499 1957985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
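The bash one-liner above is an idempotent hosts-file update: filter out any existing host.minikube.internal line, append a fresh mapping, and copy the result back over /etc/hosts. The same idiom is reused below for control-plane.minikube.internal. Expected result, for orientation (address taken from the log):

	$ grep 'host.minikube.internal' /etc/hosts
	192.168.49.1	host.minikube.internal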
	I0327 23:56:52.558043 1957985 kubeadm.go:877] updating cluster {Name:addons-482679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 23:56:52.558170 1957985 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 23:56:52.558243 1957985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:56:52.594366 1957985 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 23:56:52.594392 1957985 containerd.go:534] Images already preloaded, skipping extraction
	I0327 23:56:52.594454 1957985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:56:52.631946 1957985 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 23:56:52.631970 1957985 cache_images.go:84] Images are preloaded, skipping loading
	I0327 23:56:52.631978 1957985 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 containerd true true} ...
	I0327 23:56:52.632126 1957985 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-482679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 23:56:52.632206 1957985 ssh_runner.go:195] Run: sudo crictl info
	I0327 23:56:52.668764 1957985 cni.go:84] Creating CNI manager for ""
	I0327 23:56:52.668792 1957985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:56:52.668803 1957985 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 23:56:52.668854 1957985 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-482679 NodeName:addons-482679 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 23:56:52.669024 1957985 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-482679"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 23:56:52.669094 1957985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:56:52.678213 1957985 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 23:56:52.678293 1957985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 23:56:52.687206 1957985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0327 23:56:52.704717 1957985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:56:52.721627 1957985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
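The generated kubeadm config shown above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2167 bytes) alongside the kubelet unit files. One way to sanity-check such a config without mutating the node, sketched as an assumption (kubeadm init accepts --dry-run; minikube itself later invokes the versioned binary under /var/lib/minikube/binaries/v1.29.3):

	$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run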
	I0327 23:56:52.739076 1957985 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0327 23:56:52.742260 1957985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:56:52.752455 1957985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:52.837717 1957985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:56:52.853056 1957985 certs.go:68] Setting up /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679 for IP: 192.168.49.2
	I0327 23:56:52.853082 1957985 certs.go:194] generating shared ca certs ...
	I0327 23:56:52.853100 1957985 certs.go:226] acquiring lock for ca certs: {Name:mka210db6b2adfd3b9800e3583e6835c01f5e440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:52.853233 1957985 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key
	I0327 23:56:53.716142 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt ...
	I0327 23:56:53.716202 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt: {Name:mkcc6ede1578f5a347de6ffe474ab99de5073d8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:53.716424 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key ...
	I0327 23:56:53.716458 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key: {Name:mk8f857f2dd2c1d455c5a90020547b011d8eec6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:53.717038 1957985 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key
	I0327 23:56:53.974152 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.crt ...
	I0327 23:56:53.974186 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.crt: {Name:mk3b771debd68a5cc735024221bb41091d98eece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:53.974387 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key ...
	I0327 23:56:53.974401 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key: {Name:mk3cccf9e1b55e3e06259f7782a53a8437a8471d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:53.974958 1957985 certs.go:256] generating profile certs ...
	I0327 23:56:53.975021 1957985 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.key
	I0327 23:56:53.975047 1957985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt with IP's: []
	I0327 23:56:54.650005 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt ...
	I0327 23:56:54.650036 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: {Name:mk72e30acf2bf9a7152dc3fbf5db1dd30cbec821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:54.650223 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.key ...
	I0327 23:56:54.650237 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.key: {Name:mk23530e1a3f8c4c266f8dbc3412ebc17297e558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:54.650857 1957985 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key.a47cdf3f
	I0327 23:56:54.650881 1957985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt.a47cdf3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0327 23:56:55.109785 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt.a47cdf3f ...
	I0327 23:56:55.109819 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt.a47cdf3f: {Name:mkdb5d7599b3b845f49ee9d8ca83968b8a936680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:55.110685 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key.a47cdf3f ...
	I0327 23:56:55.110709 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key.a47cdf3f: {Name:mkcf6dc97e40955d49dfca8aa8d37604bc67f21b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:55.111331 1957985 certs.go:381] copying /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt.a47cdf3f -> /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt
	I0327 23:56:55.111436 1957985 certs.go:385] copying /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key.a47cdf3f -> /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key
	I0327 23:56:55.111499 1957985 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.key
	I0327 23:56:55.111522 1957985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.crt with IP's: []
	I0327 23:56:55.681366 1957985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.crt ...
	I0327 23:56:55.681398 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.crt: {Name:mkd455035484f5918f196672dd737af39c96f54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:55.681587 1957985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.key ...
	I0327 23:56:55.681601 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.key: {Name:mkd1b6a6029d28676002d627bff2049924cd310c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:55.682465 1957985 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 23:56:55.682516 1957985 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:56:55.682541 1957985 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:56:55.682578 1957985 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem (1675 bytes)
	I0327 23:56:55.683216 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:56:55.707109 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:56:55.730560 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:56:55.754363 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:56:55.777308 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0327 23:56:55.801373 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 23:56:55.824193 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:56:55.847847 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 23:56:55.871080 1957985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:56:55.895885 1957985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 23:56:55.913679 1957985 ssh_runner.go:195] Run: openssl version
	I0327 23:56:55.919049 1957985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:56:55.928642 1957985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:55.932036 1957985 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:55.932126 1957985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:55.939751 1957985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
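The x509 -hash call and the symlink above follow OpenSSL's hashed trust-directory convention: the CA's subject hash names the link as <hash>.0 under /etc/ssl/certs. Reconstructed from the two log lines:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0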
	I0327 23:56:55.949991 1957985 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:56:55.954371 1957985 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:56:55.954459 1957985 kubeadm.go:391] StartCluster: {Name:addons-482679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-482679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:56:55.954558 1957985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0327 23:56:55.954659 1957985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 23:56:55.996574 1957985 cri.go:89] found id: ""
	I0327 23:56:55.996694 1957985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 23:56:56.010704 1957985 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 23:56:56.019982 1957985 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0327 23:56:56.020112 1957985 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 23:56:56.029323 1957985 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 23:56:56.029349 1957985 kubeadm.go:156] found existing configuration files:
	
	I0327 23:56:56.029413 1957985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 23:56:56.038585 1957985 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 23:56:56.038655 1957985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 23:56:56.046826 1957985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 23:56:56.055554 1957985 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 23:56:56.055670 1957985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 23:56:56.064647 1957985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 23:56:56.073852 1957985 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 23:56:56.073940 1957985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 23:56:56.082254 1957985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 23:56:56.091042 1957985 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 23:56:56.091112 1957985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 23:56:56.099561 1957985 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0327 23:56:56.148754 1957985 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 23:56:56.149071 1957985 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 23:56:56.191118 1957985 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0327 23:56:56.191190 1957985 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0327 23:56:56.191228 1957985 kubeadm.go:309] OS: Linux
	I0327 23:56:56.191278 1957985 kubeadm.go:309] CGROUPS_CPU: enabled
	I0327 23:56:56.191334 1957985 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0327 23:56:56.191385 1957985 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0327 23:56:56.191435 1957985 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0327 23:56:56.191487 1957985 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0327 23:56:56.191538 1957985 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0327 23:56:56.191585 1957985 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0327 23:56:56.191635 1957985 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0327 23:56:56.191682 1957985 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0327 23:56:56.258976 1957985 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 23:56:56.259112 1957985 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 23:56:56.259221 1957985 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 23:56:56.481722 1957985 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 23:56:56.487092 1957985 out.go:204]   - Generating certificates and keys ...
	I0327 23:56:56.487234 1957985 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 23:56:56.487328 1957985 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 23:56:57.025787 1957985 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 23:56:57.183213 1957985 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 23:56:57.535410 1957985 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 23:56:57.822464 1957985 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 23:56:58.916031 1957985 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 23:56:58.916353 1957985 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-482679 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0327 23:56:59.740311 1957985 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 23:56:59.740860 1957985 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-482679 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0327 23:56:59.924844 1957985 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 23:57:00.981679 1957985 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 23:57:01.969137 1957985 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 23:57:01.969689 1957985 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 23:57:02.337555 1957985 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 23:57:03.673618 1957985 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 23:57:03.991167 1957985 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 23:57:04.751867 1957985 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 23:57:05.465821 1957985 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 23:57:05.466459 1957985 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 23:57:05.470899 1957985 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 23:57:05.473351 1957985 out.go:204]   - Booting up control plane ...
	I0327 23:57:05.473448 1957985 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 23:57:05.473524 1957985 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 23:57:05.473972 1957985 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 23:57:05.484500 1957985 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 23:57:05.485374 1957985 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 23:57:05.485637 1957985 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 23:57:05.585703 1957985 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 23:57:12.593083 1957985 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.007479 seconds
	I0327 23:57:12.612691 1957985 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 23:57:12.627285 1957985 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 23:57:13.155418 1957985 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 23:57:13.155863 1957985 kubeadm.go:309] [mark-control-plane] Marking the node addons-482679 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 23:57:13.667678 1957985 kubeadm.go:309] [bootstrap-token] Using token: vie3po.0q6g65kpvcijyxvl
	I0327 23:57:13.669393 1957985 out.go:204]   - Configuring RBAC rules ...
	I0327 23:57:13.669509 1957985 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 23:57:13.674654 1957985 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 23:57:13.683741 1957985 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 23:57:13.687762 1957985 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 23:57:13.691573 1957985 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 23:57:13.695358 1957985 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 23:57:13.708874 1957985 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 23:57:13.915329 1957985 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 23:57:14.080675 1957985 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 23:57:14.082142 1957985 kubeadm.go:309] 
	I0327 23:57:14.082260 1957985 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 23:57:14.082278 1957985 kubeadm.go:309] 
	I0327 23:57:14.082353 1957985 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 23:57:14.082368 1957985 kubeadm.go:309] 
	I0327 23:57:14.082401 1957985 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 23:57:14.082568 1957985 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 23:57:14.082628 1957985 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 23:57:14.082638 1957985 kubeadm.go:309] 
	I0327 23:57:14.082691 1957985 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 23:57:14.082700 1957985 kubeadm.go:309] 
	I0327 23:57:14.082746 1957985 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 23:57:14.082753 1957985 kubeadm.go:309] 
	I0327 23:57:14.082804 1957985 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 23:57:14.082890 1957985 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 23:57:14.082961 1957985 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 23:57:14.082971 1957985 kubeadm.go:309] 
	I0327 23:57:14.083053 1957985 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 23:57:14.083131 1957985 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 23:57:14.083140 1957985 kubeadm.go:309] 
	I0327 23:57:14.083222 1957985 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vie3po.0q6g65kpvcijyxvl \
	I0327 23:57:14.083326 1957985 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3196aff351ef2c125525871fc87785f4c92e902d09ef7d39825c387aa49fa380 \
	I0327 23:57:14.083351 1957985 kubeadm.go:309] 	--control-plane 
	I0327 23:57:14.083368 1957985 kubeadm.go:309] 
	I0327 23:57:14.083564 1957985 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 23:57:14.083576 1957985 kubeadm.go:309] 
	I0327 23:57:14.083659 1957985 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vie3po.0q6g65kpvcijyxvl \
	I0327 23:57:14.083764 1957985 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3196aff351ef2c125525871fc87785f4c92e902d09ef7d39825c387aa49fa380 
	I0327 23:57:14.086985 1957985 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0327 23:57:14.087199 1957985 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
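The join commands kubeadm printed above pin trust with --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key. A standard derivation, using the certificatesDir from the config above (sketch; assumes an RSA CA key):

	$ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	3196aff351ef2c125525871fc87785f4c92e902d09ef7d39825c387aa49fa380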
	I0327 23:57:14.087225 1957985 cni.go:84] Creating CNI manager for ""
	I0327 23:57:14.087248 1957985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:57:14.090370 1957985 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0327 23:57:14.091804 1957985 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0327 23:57:14.098009 1957985 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0327 23:57:14.098034 1957985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0327 23:57:14.137062 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0327 23:57:14.464355 1957985 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 23:57:14.464488 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:14.464600 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-482679 minikube.k8s.io/updated_at=2024_03_27T23_57_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873 minikube.k8s.io/name=addons-482679 minikube.k8s.io/primary=true
	I0327 23:57:14.609612 1957985 ops.go:34] apiserver oom_adj: -16
	I0327 23:57:14.609746 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:15.110620 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:15.610381 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:16.110601 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:16.609933 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:17.110417 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:17.609896 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:18.110012 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:18.610643 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:19.109882 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:19.609848 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:20.110270 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:20.610704 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:21.110312 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:21.610685 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:22.110702 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:22.610070 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:23.110373 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:23.610499 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:24.110242 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:24.610184 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:25.110857 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:25.609827 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:26.110379 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:26.610209 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:27.110642 1957985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:57:27.222458 1957985 kubeadm.go:1107] duration metric: took 12.758015578s to wait for elevateKubeSystemPrivileges
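The burst of identical "kubectl get sa default" runs above is a readiness poll: minikube retries roughly every 500ms until the default service account exists, which is what the 12.75s elevateKubeSystemPrivileges metric measures. The equivalent loop in standalone form (sketch; the interval and redirects are assumptions):

	# poll until the default service account is created by the controller manager
	until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done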
	W0327 23:57:27.222489 1957985 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 23:57:27.222496 1957985 kubeadm.go:393] duration metric: took 31.268068211s to StartCluster
	I0327 23:57:27.222512 1957985 settings.go:142] acquiring lock: {Name:mk8bd0eb5f984b7df18eb5fe3af15aec887e343a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:57:27.222986 1957985 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0327 23:57:27.223372 1957985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/kubeconfig: {Name:mk4e0e309c01b086d75fed1e6a33183905fae8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:57:27.223553 1957985 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 23:57:27.225347 1957985 out.go:177] * Verifying Kubernetes components...
	I0327 23:57:27.223633 1957985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 23:57:27.223794 1957985 config.go:182] Loaded profile config "addons-482679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 23:57:27.223802 1957985 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0327 23:57:27.227241 1957985 addons.go:69] Setting yakd=true in profile "addons-482679"
	I0327 23:57:27.227271 1957985 addons.go:234] Setting addon yakd=true in "addons-482679"
	I0327 23:57:27.227301 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.227753 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.227911 1957985 addons.go:69] Setting ingress=true in profile "addons-482679"
	I0327 23:57:27.227931 1957985 addons.go:234] Setting addon ingress=true in "addons-482679"
	I0327 23:57:27.227958 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.228349 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.229460 1957985 addons.go:69] Setting ingress-dns=true in profile "addons-482679"
	I0327 23:57:27.229492 1957985 addons.go:234] Setting addon ingress-dns=true in "addons-482679"
	I0327 23:57:27.229522 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.229969 1957985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:57:27.230161 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.232584 1957985 addons.go:69] Setting cloud-spanner=true in profile "addons-482679"
	I0327 23:57:27.232758 1957985 addons.go:234] Setting addon cloud-spanner=true in "addons-482679"
	I0327 23:57:27.232889 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.233798 1957985 addons.go:69] Setting inspektor-gadget=true in profile "addons-482679"
	I0327 23:57:27.233822 1957985 addons.go:234] Setting addon inspektor-gadget=true in "addons-482679"
	I0327 23:57:27.233848 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.234398 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.234928 1957985 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-482679"
	I0327 23:57:27.234995 1957985 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-482679"
	I0327 23:57:27.235033 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.244245 1957985 addons.go:69] Setting metrics-server=true in profile "addons-482679"
	I0327 23:57:27.244376 1957985 addons.go:234] Setting addon metrics-server=true in "addons-482679"
	I0327 23:57:27.244451 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.245043 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.245317 1957985 addons.go:69] Setting default-storageclass=true in profile "addons-482679"
	I0327 23:57:27.245369 1957985 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-482679"
	I0327 23:57:27.245641 1957985 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-482679"
	I0327 23:57:27.245683 1957985 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-482679"
	I0327 23:57:27.245734 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.255223 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.255551 1957985 addons.go:69] Setting gcp-auth=true in profile "addons-482679"
	I0327 23:57:27.255640 1957985 mustload.go:65] Loading cluster: addons-482679
	I0327 23:57:27.262041 1957985 config.go:182] Loaded profile config "addons-482679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 23:57:27.262447 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.268619 1957985 addons.go:69] Setting registry=true in profile "addons-482679"
	I0327 23:57:27.268655 1957985 addons.go:234] Setting addon registry=true in "addons-482679"
	I0327 23:57:27.268696 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.269113 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.256300 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.285461 1957985 addons.go:69] Setting storage-provisioner=true in profile "addons-482679"
	I0327 23:57:27.285493 1957985 addons.go:234] Setting addon storage-provisioner=true in "addons-482679"
	I0327 23:57:27.285530 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.256826 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.310242 1957985 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-482679"
	I0327 23:57:27.310320 1957985 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-482679"
	I0327 23:57:27.284037 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.316817 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.336780 1957985 addons.go:69] Setting volumesnapshots=true in profile "addons-482679"
	I0327 23:57:27.336874 1957985 addons.go:234] Setting addon volumesnapshots=true in "addons-482679"
	I0327 23:57:27.336941 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.337453 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.344804 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.379758 1957985 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:57:27.382116 1957985 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0327 23:57:27.383778 1957985 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:57:27.388324 1957985 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 23:57:27.388346 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0327 23:57:27.388412 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.395062 1957985 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0327 23:57:27.401554 1957985 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0327 23:57:27.431014 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0327 23:57:27.431038 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0327 23:57:27.431106 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.457358 1957985 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0327 23:57:27.472209 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0327 23:57:27.420936 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0327 23:57:27.420949 1957985 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0327 23:57:27.465953 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.467412 1957985 addons.go:234] Setting addon default-storageclass=true in "addons-482679"
	I0327 23:57:27.472184 1957985 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0327 23:57:27.478334 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0327 23:57:27.481667 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.481680 1957985 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0327 23:57:27.481690 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0327 23:57:27.486141 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0327 23:57:27.486216 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.488415 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.488511 1957985 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 23:57:27.488517 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0327 23:57:27.488546 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.490008 1957985 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0327 23:57:27.490014 1957985 out.go:177]   - Using image docker.io/registry:2.8.3
	I0327 23:57:27.511080 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0327 23:57:27.514086 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0327 23:57:27.511303 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0327 23:57:27.497431 1957985 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 23:57:27.517546 1957985 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-482679"
	I0327 23:57:27.518156 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.520419 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 23:57:27.520432 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 23:57:27.523347 1957985 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 23:57:27.523447 1957985 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 23:57:27.523503 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:27.526703 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0327 23:57:27.528034 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0327 23:57:27.528619 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.531201 1957985 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0327 23:57:27.535322 1957985 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0327 23:57:27.535341 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0327 23:57:27.535405 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.533628 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:27.542156 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
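Each "new ssh client" line is a session dialed against that forwarded port with the machine's generated key. A rough interactive equivalent, assuming the key path, port, and user shown in the entry above:

	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa \
	    -p 35039 docker@127.0.0.1 'echo connected'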
	I0327 23:57:27.550776 1957985 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:57:27.550797 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 23:57:27.550860 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.528246 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0327 23:57:27.558100 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.572874 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0327 23:57:27.583602 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0327 23:57:27.585574 1957985 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0327 23:57:27.528226 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.590264 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 23:57:27.590298 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0327 23:57:27.590387 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.611068 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.612223 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.659870 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.699710 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.710290 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.741307 1957985 out.go:177]   - Using image docker.io/busybox:stable
	I0327 23:57:27.739006 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.745757 1957985 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0327 23:57:27.744789 1957985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 23:57:27.744840 1957985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:57:27.746467 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.748329 1957985 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 23:57:27.748564 1957985 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 23:57:27.748578 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 23:57:27.748682 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.749971 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0327 23:57:27.750062 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:27.754927 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.757622 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.772715 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.805655 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.807219 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:27.934291 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0327 23:57:27.934315 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0327 23:57:27.978178 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 23:57:28.001946 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 23:57:28.001973 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0327 23:57:28.100684 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 23:57:28.144313 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0327 23:57:28.144341 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0327 23:57:28.171939 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0327 23:57:28.180074 1957985 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 23:57:28.180153 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0327 23:57:28.182570 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 23:57:28.191360 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0327 23:57:28.191422 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0327 23:57:28.217475 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 23:57:28.221210 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:57:28.257804 1957985 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0327 23:57:28.257885 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0327 23:57:28.296301 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 23:57:28.296382 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0327 23:57:28.325749 1957985 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 23:57:28.325820 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0327 23:57:28.395506 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0327 23:57:28.395585 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0327 23:57:28.416043 1957985 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 23:57:28.416141 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0327 23:57:28.453510 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 23:57:28.463451 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 23:57:28.463521 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0327 23:57:28.625661 1957985 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0327 23:57:28.625723 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0327 23:57:28.644932 1957985 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 23:57:28.644955 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 23:57:28.710615 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0327 23:57:28.715727 1957985 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 23:57:28.715795 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0327 23:57:28.739619 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 23:57:28.739685 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0327 23:57:28.774006 1957985 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0327 23:57:28.774075 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0327 23:57:28.815095 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 23:57:28.815171 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0327 23:57:28.864647 1957985 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 23:57:28.864719 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 23:57:28.884574 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 23:57:28.884636 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0327 23:57:29.088013 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 23:57:29.088095 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0327 23:57:29.115008 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 23:57:29.115081 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0327 23:57:29.132340 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0327 23:57:29.354268 1957985 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 23:57:29.354342 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0327 23:57:29.356980 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 23:57:29.359220 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 23:57:29.359290 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0327 23:57:29.513392 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0327 23:57:29.513465 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0327 23:57:29.532342 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 23:57:29.624190 1957985 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 23:57:29.624265 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0327 23:57:29.776965 1957985 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 23:57:29.777040 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0327 23:57:29.848984 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 23:57:29.849056 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0327 23:57:30.015005 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 23:57:30.015086 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0327 23:57:30.043765 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 23:57:30.145147 1957985 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.396759052s)
	I0327 23:57:30.145442 1957985 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.397402715s)
	I0327 23:57:30.145600 1957985 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
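The replace works by piping the live coredns ConfigMap through the sed expressions shown a few lines up. Reconstructed from those two expressions, the rewritten Corefile gains a `log` directive ahead of `errors` and a `hosts` block ahead of the existing `forward` stanza, roughly:

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf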
	I0327 23:57:30.147537 1957985 node_ready.go:35] waiting up to 6m0s for node "addons-482679" to be "Ready" ...
	I0327 23:57:30.155281 1957985 node_ready.go:49] node "addons-482679" has status "Ready":"True"
	I0327 23:57:30.155314 1957985 node_ready.go:38] duration metric: took 7.683329ms for node "addons-482679" to be "Ready" ...
	I0327 23:57:30.155327 1957985 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:57:30.174255 1957985 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-g8vgw" in "kube-system" namespace to be "Ready" ...
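pod_ready polls each pod's Ready condition through the API server; the same check can be reproduced from outside the node with kubectl wait, using the pod name from this run:

	kubectl --context addons-482679 -n kube-system wait --for=condition=Ready \
	    pod/coredns-76f75df574-g8vgw --timeout=6m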
	I0327 23:57:30.275326 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 23:57:30.275389 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0327 23:57:30.505856 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 23:57:30.505939 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0327 23:57:30.671787 1957985 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-482679" context rescaled to 1 replicas
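kubeadm ships CoreDNS with two replicas, and the rescale above trims the deployment to one; the second pod, coredns-76f75df574-vjc9g, is reported "not found" further down for this reason. A CLI equivalent of the rescale:

	kubectl --context addons-482679 -n kube-system scale deployment coredns --replicas=1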
	I0327 23:57:30.916049 1957985 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 23:57:30.916128 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0327 23:57:31.300438 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 23:57:32.205497 1957985 pod_ready.go:102] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"False"
	I0327 23:57:34.495559 1957985 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0327 23:57:34.495723 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:34.518301 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:34.708805 1957985 pod_ready.go:102] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"False"
	I0327 23:57:34.841969 1957985 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0327 23:57:34.929385 1957985 addons.go:234] Setting addon gcp-auth=true in "addons-482679"
	I0327 23:57:34.929438 1957985 host.go:66] Checking if "addons-482679" exists ...
	I0327 23:57:34.929869 1957985 cli_runner.go:164] Run: docker container inspect addons-482679 --format={{.State.Status}}
	I0327 23:57:34.952436 1957985 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0327 23:57:34.952492 1957985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-482679
	I0327 23:57:34.974226 1957985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35039 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/addons-482679/id_rsa Username:docker}
	I0327 23:57:35.148895 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.170677415s)
	I0327 23:57:35.148943 1957985 addons.go:470] Verifying addon ingress=true in "addons-482679"
	I0327 23:57:35.161582 1957985 out.go:177] * Verifying ingress addon...
	I0327 23:57:35.149117 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.048404811s)
	I0327 23:57:35.149210 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.97714425s)
	I0327 23:57:35.149273 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.966646343s)
	I0327 23:57:35.149294 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.93174845s)
	I0327 23:57:35.149340 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.928060704s)
	I0327 23:57:35.149393 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.695801618s)
	I0327 23:57:35.149424 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.438747064s)
	I0327 23:57:35.149495 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.017079691s)
	I0327 23:57:35.149611 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.792556906s)
	I0327 23:57:35.149730 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.61731294s)
	I0327 23:57:35.149797 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.105961734s)
	I0327 23:57:35.178948 1957985 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0327 23:57:35.179338 1957985 addons.go:470] Verifying addon registry=true in "addons-482679"
	I0327 23:57:35.186783 1957985 out.go:177] * Verifying registry addon...
	I0327 23:57:35.179509 1957985 addons.go:470] Verifying addon metrics-server=true in "addons-482679"
	W0327 23:57:35.179558 1957985 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
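The failure is an ordering race, not a bad manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is applied in the same invocation as the CRDs that define its kind, and the API server has not yet registered the new REST mapping, hence "ensure CRDs are installed first". The `apply --force` retry below goes through, most likely because by then the CRDs created on the first attempt are established. A manual sequence that avoids the race, assuming the paths installed on the node in this run:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl \
	    apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl \
	    wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl \
	    apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml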
	I0327 23:57:35.198136 1957985 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0327 23:57:35.199666 1957985 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-482679 service yakd-dashboard -n yakd-dashboard
	
	I0327 23:57:35.199832 1957985 retry.go:31] will retry after 186.444737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 23:57:35.230717 1957985 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 23:57:35.231674 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:35.231640 1957985 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0327 23:57:35.231761 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0327 23:57:35.243224 1957985 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
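The default-storageclass warning is a stale-resourceVersion write conflict: the object was modified by another writer between the read and the update, so the update was rejected. The same annotation flip can be done without carrying a resourceVersion, for example:

	kubectl --context addons-482679 patch storageclass local-path \
	    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'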
	I0327 23:57:35.388741 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 23:57:35.688376 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:35.705521 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:36.200757 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:36.210021 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:36.578515 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.27797928s)
	I0327 23:57:36.578550 1957985 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-482679"
	I0327 23:57:36.580497 1957985 out.go:177] * Verifying csi-hostpath-driver addon...
	I0327 23:57:36.578750 1957985 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.626287685s)
	I0327 23:57:36.582879 1957985 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0327 23:57:36.584923 1957985 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:57:36.587098 1957985 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0327 23:57:36.589588 1957985 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 23:57:36.589610 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0327 23:57:36.595735 1957985 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 23:57:36.595764 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:36.623828 1957985 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 23:57:36.623853 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0327 23:57:36.662811 1957985 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 23:57:36.662836 1957985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0327 23:57:36.683540 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:36.704103 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:36.711522 1957985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 23:57:37.088490 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:37.180690 1957985 pod_ready.go:102] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"False"
	I0327 23:57:37.184333 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:37.204926 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:37.219216 1957985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.830422265s)
	I0327 23:57:37.618255 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:37.621379 1957985 addons.go:470] Verifying addon gcp-auth=true in "addons-482679"
	I0327 23:57:37.623385 1957985 out.go:177] * Verifying gcp-auth addon...
	I0327 23:57:37.627056 1957985 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0327 23:57:37.668288 1957985 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0327 23:57:37.668314 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:37.690236 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:37.706755 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:38.089726 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:38.133654 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:38.185878 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:38.204702 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:38.588421 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:38.630966 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:38.684055 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:38.704472 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:39.088427 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:39.131502 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:39.186694 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:39.187060 1957985 pod_ready.go:102] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"False"
	I0327 23:57:39.206210 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:39.588884 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:39.631666 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:39.681916 1957985 pod_ready.go:92] pod "coredns-76f75df574-g8vgw" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.681942 1957985 pod_ready.go:81] duration metric: took 9.507605067s for pod "coredns-76f75df574-g8vgw" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.681955 1957985 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vjc9g" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.685961 1957985 pod_ready.go:97] error getting pod "coredns-76f75df574-vjc9g" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-vjc9g" not found
	I0327 23:57:39.685987 1957985 pod_ready.go:81] duration metric: took 4.025155ms for pod "coredns-76f75df574-vjc9g" in "kube-system" namespace to be "Ready" ...
	E0327 23:57:39.685997 1957985 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-vjc9g" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-vjc9g" not found
	I0327 23:57:39.686005 1957985 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.687655 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:39.692381 1957985 pod_ready.go:92] pod "etcd-addons-482679" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.692409 1957985 pod_ready.go:81] duration metric: took 6.396847ms for pod "etcd-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.692423 1957985 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.698268 1957985 pod_ready.go:92] pod "kube-apiserver-addons-482679" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.698294 1957985 pod_ready.go:81] duration metric: took 5.863165ms for pod "kube-apiserver-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.698304 1957985 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.707313 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:39.707824 1957985 pod_ready.go:92] pod "kube-controller-manager-addons-482679" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.707849 1957985 pod_ready.go:81] duration metric: took 9.537462ms for pod "kube-controller-manager-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.707860 1957985 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-27xjv" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.879488 1957985 pod_ready.go:92] pod "kube-proxy-27xjv" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:39.879514 1957985 pod_ready.go:81] duration metric: took 171.645903ms for pod "kube-proxy-27xjv" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:39.879527 1957985 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:40.089793 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:40.130935 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:40.184716 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:40.204805 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:40.278675 1957985 pod_ready.go:92] pod "kube-scheduler-addons-482679" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:40.278750 1957985 pod_ready.go:81] duration metric: took 399.213739ms for pod "kube-scheduler-addons-482679" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:40.278777 1957985 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mrhg6" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:40.589640 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:40.631421 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:40.678303 1957985 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-mrhg6" in "kube-system" namespace has status "Ready":"True"
	I0327 23:57:40.678330 1957985 pod_ready.go:81] duration metric: took 399.531497ms for pod "nvidia-device-plugin-daemonset-mrhg6" in "kube-system" namespace to be "Ready" ...
	I0327 23:57:40.678340 1957985 pod_ready.go:38] duration metric: took 10.522998679s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:57:40.678357 1957985 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:57:40.678469 1957985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
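The process check above uses terse pgrep flags; annotated, the same command reads:

	# -x exact match, -n newest matching process, -f match against the full command line
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'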
	I0327 23:57:40.683369 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:40.695240 1957985 api_server.go:72] duration metric: took 13.47165881s to wait for apiserver process to appear ...
	I0327 23:57:40.695266 1957985 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:57:40.695286 1957985 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 23:57:40.702817 1957985 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
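The healthz probe hits the apiserver endpoint directly; a by-hand equivalent, using the control-plane IP and port from this run (the apiserver presents a cert signed by minikube's own CA, hence -k):

	curl -k https://192.168.49.2:8443/healthz
	# ok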
	I0327 23:57:40.703949 1957985 api_server.go:141] control plane version: v1.29.3
	I0327 23:57:40.703977 1957985 api_server.go:131] duration metric: took 8.703893ms to wait for apiserver health ...
	I0327 23:57:40.703987 1957985 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:57:40.707066 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:40.885191 1957985 system_pods.go:59] 18 kube-system pods found
	I0327 23:57:40.885229 1957985 system_pods.go:61] "coredns-76f75df574-g8vgw" [867c0aa5-1d07-4ff2-be4b-9ed7ce403871] Running
	I0327 23:57:40.885238 1957985 system_pods.go:61] "csi-hostpath-attacher-0" [daf424b4-f84b-4a6a-a011-4bfa22212b97] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 23:57:40.885247 1957985 system_pods.go:61] "csi-hostpath-resizer-0" [7f638b92-85f4-415f-a924-86650cfe8dfb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 23:57:40.885256 1957985 system_pods.go:61] "csi-hostpathplugin-6lzmh" [df3aa80c-fed2-47a2-b856-111e7de3128b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 23:57:40.885262 1957985 system_pods.go:61] "etcd-addons-482679" [f70527f9-6a15-4ab5-8778-24cee184984b] Running
	I0327 23:57:40.885267 1957985 system_pods.go:61] "kindnet-425ft" [54e31491-ad3f-4dab-8464-853f25f30101] Running
	I0327 23:57:40.885271 1957985 system_pods.go:61] "kube-apiserver-addons-482679" [1502c7af-0780-41fa-a22e-6c294015cca2] Running
	I0327 23:57:40.885283 1957985 system_pods.go:61] "kube-controller-manager-addons-482679" [7de1d5fa-2fc1-4214-8882-e6338a7d0b2c] Running
	I0327 23:57:40.885292 1957985 system_pods.go:61] "kube-ingress-dns-minikube" [ba1e77e0-74b3-4260-8083-f6d10de6cff7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0327 23:57:40.885303 1957985 system_pods.go:61] "kube-proxy-27xjv" [2251c29f-a21a-47c4-bbde-6205c6556081] Running
	I0327 23:57:40.885307 1957985 system_pods.go:61] "kube-scheduler-addons-482679" [6da7eaa9-257a-4896-afb6-860a4d96f8fe] Running
	I0327 23:57:40.885313 1957985 system_pods.go:61] "metrics-server-69cf46c98-txgn5" [e0e41c2b-b28d-474c-81f6-204fae8b58f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0327 23:57:40.885318 1957985 system_pods.go:61] "nvidia-device-plugin-daemonset-mrhg6" [aa83998b-3a9a-4746-abdb-f97f818000d6] Running
	I0327 23:57:40.885324 1957985 system_pods.go:61] "registry-c6js5" [f3baf2dc-389c-478c-8148-510e917e380b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0327 23:57:40.885330 1957985 system_pods.go:61] "registry-proxy-mgwql" [5d95c02e-cd9e-4a2f-8b0b-0e6e7f131536] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 23:57:40.885339 1957985 system_pods.go:61] "snapshot-controller-58dbcc7b99-j6jxb" [f5f560d1-85b1-4430-9933-76522bc5156a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:57:40.885347 1957985 system_pods.go:61] "snapshot-controller-58dbcc7b99-xf7gf" [90d05ce1-64eb-425c-a5c3-9cc16d6d5458] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:57:40.885356 1957985 system_pods.go:61] "storage-provisioner" [ceceb59c-5b5a-4651-83fe-54d165f371b0] Running
	I0327 23:57:40.885363 1957985 system_pods.go:74] duration metric: took 181.370433ms to wait for pod list to return data ...
	I0327 23:57:40.885379 1957985 default_sa.go:34] waiting for default service account to be created ...
	I0327 23:57:41.078236 1957985 default_sa.go:45] found service account: "default"
	I0327 23:57:41.078266 1957985 default_sa.go:55] duration metric: took 192.878418ms for default service account to be created ...
	I0327 23:57:41.078277 1957985 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 23:57:41.089828 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:41.131353 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:41.184086 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:41.205530 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:41.284836 1957985 system_pods.go:86] 18 kube-system pods found
	I0327 23:57:41.284869 1957985 system_pods.go:89] "coredns-76f75df574-g8vgw" [867c0aa5-1d07-4ff2-be4b-9ed7ce403871] Running
	I0327 23:57:41.284880 1957985 system_pods.go:89] "csi-hostpath-attacher-0" [daf424b4-f84b-4a6a-a011-4bfa22212b97] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 23:57:41.284887 1957985 system_pods.go:89] "csi-hostpath-resizer-0" [7f638b92-85f4-415f-a924-86650cfe8dfb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 23:57:41.284942 1957985 system_pods.go:89] "csi-hostpathplugin-6lzmh" [df3aa80c-fed2-47a2-b856-111e7de3128b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 23:57:41.284949 1957985 system_pods.go:89] "etcd-addons-482679" [f70527f9-6a15-4ab5-8778-24cee184984b] Running
	I0327 23:57:41.284959 1957985 system_pods.go:89] "kindnet-425ft" [54e31491-ad3f-4dab-8464-853f25f30101] Running
	I0327 23:57:41.284964 1957985 system_pods.go:89] "kube-apiserver-addons-482679" [1502c7af-0780-41fa-a22e-6c294015cca2] Running
	I0327 23:57:41.284968 1957985 system_pods.go:89] "kube-controller-manager-addons-482679" [7de1d5fa-2fc1-4214-8882-e6338a7d0b2c] Running
	I0327 23:57:41.284990 1957985 system_pods.go:89] "kube-ingress-dns-minikube" [ba1e77e0-74b3-4260-8083-f6d10de6cff7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0327 23:57:41.285003 1957985 system_pods.go:89] "kube-proxy-27xjv" [2251c29f-a21a-47c4-bbde-6205c6556081] Running
	I0327 23:57:41.285009 1957985 system_pods.go:89] "kube-scheduler-addons-482679" [6da7eaa9-257a-4896-afb6-860a4d96f8fe] Running
	I0327 23:57:41.285018 1957985 system_pods.go:89] "metrics-server-69cf46c98-txgn5" [e0e41c2b-b28d-474c-81f6-204fae8b58f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0327 23:57:41.285023 1957985 system_pods.go:89] "nvidia-device-plugin-daemonset-mrhg6" [aa83998b-3a9a-4746-abdb-f97f818000d6] Running
	I0327 23:57:41.285038 1957985 system_pods.go:89] "registry-c6js5" [f3baf2dc-389c-478c-8148-510e917e380b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0327 23:57:41.285045 1957985 system_pods.go:89] "registry-proxy-mgwql" [5d95c02e-cd9e-4a2f-8b0b-0e6e7f131536] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 23:57:41.285070 1957985 system_pods.go:89] "snapshot-controller-58dbcc7b99-j6jxb" [f5f560d1-85b1-4430-9933-76522bc5156a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:57:41.285091 1957985 system_pods.go:89] "snapshot-controller-58dbcc7b99-xf7gf" [90d05ce1-64eb-425c-a5c3-9cc16d6d5458] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:57:41.285108 1957985 system_pods.go:89] "storage-provisioner" [ceceb59c-5b5a-4651-83fe-54d165f371b0] Running
	I0327 23:57:41.285124 1957985 system_pods.go:126] duration metric: took 206.839589ms to wait for k8s-apps to be running ...
	I0327 23:57:41.285132 1957985 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 23:57:41.285208 1957985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:57:41.297420 1957985 system_svc.go:56] duration metric: took 12.278712ms WaitForService to wait for kubelet
	I0327 23:57:41.297450 1957985 kubeadm.go:576] duration metric: took 14.073873624s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:57:41.297472 1957985 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:57:41.478862 1957985 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 23:57:41.478898 1957985 node_conditions.go:123] node cpu capacity is 2
	I0327 23:57:41.478913 1957985 node_conditions.go:105] duration metric: took 181.435237ms to run NodePressure ...
	I0327 23:57:41.478947 1957985 start.go:240] waiting for startup goroutines ...
	I0327 23:57:41.589817 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:41.631066 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:41.684561 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:41.704678 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:42.093736 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:42.137208 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:42.189655 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:42.216503 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:42.589767 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:42.631531 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:42.684317 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:42.705154 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:43.089685 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:43.131794 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:43.184234 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:43.205398 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:43.590150 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:43.631937 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:43.684659 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:43.704123 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:44.091764 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:44.132215 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:44.183789 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:44.205389 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:44.591428 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:44.631103 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:44.683582 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:44.705498 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:45.099434 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:45.150086 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:45.187584 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:45.219200 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:45.588931 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:45.630540 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:45.684251 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:45.705509 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:46.089575 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:46.131460 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:46.184585 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:46.205838 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:46.590089 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:46.631326 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:46.684522 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:46.704275 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:47.089813 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:47.131566 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:47.184516 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:47.204961 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:47.589391 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:47.630805 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:47.683867 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:47.704315 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:48.088598 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:48.132660 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:48.184264 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:48.205122 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:48.591898 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:48.631327 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:48.684612 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:48.707063 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:49.089605 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:49.131522 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:49.184440 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:49.205407 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:49.589809 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:49.631738 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:49.684351 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:49.705455 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:50.090143 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:50.134163 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:50.191355 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:50.205859 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:50.589042 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:50.631373 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:50.684909 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:50.704229 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:51.088935 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:51.131068 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:51.184013 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:51.204887 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:57:51.588895 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:51.631209 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:51.684073 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:51.704810 1957985 kapi.go:107] duration metric: took 16.506672994s to wait for kubernetes.io/minikube-addons=registry ...
	I0327 23:57:52.089435 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:52.131101 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:52.184899 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:52.589103 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:52.631488 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:52.684695 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:53.089508 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:53.130810 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:53.186065 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:53.589432 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:53.631458 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:53.688210 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:54.089435 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:54.131407 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:54.183982 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:54.595815 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:54.641096 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:54.693899 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:55.090267 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:55.131826 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:55.185136 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:55.588826 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:55.631280 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:55.684519 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:56.089118 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:56.131615 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:56.184375 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:56.588991 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:56.630780 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:56.684805 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:57.089267 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:57.131330 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:57.184385 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:57.588407 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:57.631070 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:57.685063 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:58.089831 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:58.131442 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:58.184346 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:58.590038 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:58.631213 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:58.684086 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:59.089169 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:59.131636 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:59.184236 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:57:59.589365 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:57:59.631640 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:57:59.685328 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:00.093621 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:00.162082 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:00.246896 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:00.623347 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:00.630914 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:00.684121 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:01.089353 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:01.134350 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:01.190378 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:01.592549 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:01.630705 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:01.684036 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:02.090532 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:02.135285 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:02.184980 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:02.592689 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:02.633800 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:02.698700 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:03.088838 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:03.131293 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:03.189202 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:03.589060 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:03.631023 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:03.683818 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:04.089387 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:04.131446 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:04.183752 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:04.590420 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:04.631652 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:04.683767 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:05.089173 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:05.131714 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:05.184442 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:05.589201 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:05.641646 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:05.684671 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:06.089560 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:06.131645 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:06.184219 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:06.588844 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:06.638478 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:06.685385 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:07.089074 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:07.133530 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:07.184801 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:07.588207 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:07.634182 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:07.684232 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:08.088933 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:08.131609 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:08.184207 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:08.589372 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:08.631662 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:08.694051 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:09.094524 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:09.131042 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:09.185379 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:09.590870 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:09.632311 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:09.684635 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:10.094764 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:10.131390 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:10.184973 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:10.588708 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:10.631357 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:10.684939 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:11.093251 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:11.131120 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:11.183802 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:11.588432 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:11.631218 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:11.683707 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:12.089534 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:12.131603 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:12.184358 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:12.589220 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:12.630791 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:12.684409 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:13.089870 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:13.133395 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:13.184044 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:13.588812 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:13.632242 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:13.684187 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:14.090157 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:14.131480 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:14.184594 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:14.589039 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:14.630924 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:14.683607 1957985 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:58:15.089573 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:15.132383 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:15.184917 1957985 kapi.go:107] duration metric: took 40.005966245s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0327 23:58:15.589892 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:15.633868 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:16.089789 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:16.131634 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:16.588703 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:16.631328 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:17.090082 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:17.131811 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:17.589086 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:17.630966 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:58:18.088684 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:18.135691 1957985 kapi.go:107] duration metric: took 40.508633342s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0327 23:58:18.138955 1957985 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-482679 cluster.
	I0327 23:58:18.141002 1957985 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0327 23:58:18.143346 1957985 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0327 23:58:18.589454 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:19.087859 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:19.588349 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:20.089519 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:20.588850 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:21.089115 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:21.588657 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:22.088340 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:22.589325 1957985 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:58:23.089059 1957985 kapi.go:107] duration metric: took 46.506181182s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0327 23:58:23.091027 1957985 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0327 23:58:23.092956 1957985 addons.go:505] duration metric: took 55.86914339s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0327 23:58:23.093016 1957985 start.go:245] waiting for cluster config update ...
	I0327 23:58:23.093044 1957985 start.go:254] writing updated cluster config ...
	I0327 23:58:23.093330 1957985 ssh_runner.go:195] Run: rm -f paused
	I0327 23:58:23.421459 1957985 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 23:58:23.423220 1957985 out.go:177] * Done! kubectl is now configured to use "addons-482679" cluster and "default" namespace by default
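The long run of kapi.go:96 lines above is minikube polling each addon's pods by label selector until they leave Pending; the kapi.go:107 lines record how long each selector took. Below is a minimal, hypothetical sketch of such a polling loop using client-go. It is not minikube's actual implementation; the function name, the 500ms interval, the all-namespaces listing, and the kubeconfig loading are assumptions for illustration.

// wait_for_pods.go: a sketch of label-selector polling in the style of the
// "waiting for pod ... current state: Pending" lines above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls until every pod matching selector is Running,
// or the timeout elapses.
func waitForPodsRunning(ctx context.Context, cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					// Mirrors the kapi.go:96 log line format.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsRunning(context.Background(), cs,
		"kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
		panic(err)
	}
}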
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	db1cb5fa8dec4       fc9db2894f4e4       1 second ago         Exited              helper-pod                0                   bdf163b57df87       helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479
	9e6e881a17b9e       46bd05c4a04f3       4 seconds ago        Exited              busybox                   0                   716cf51852b32       test-local-path
	4935da6960a04       fc9db2894f4e4       8 seconds ago        Exited              helper-pod                0                   1de9b148bd97b       helper-pod-create-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479
	f92a28d2b0d0f       dd1b12fcb6097       20 seconds ago       Exited              hello-world-app           2                   7a711e53fd8c0       hello-world-app-5d77478584-zdvkh
	96537110c042f       b8c82647e8a25       48 seconds ago       Running             nginx                     0                   d0fda75a2f88e       nginx
	83571fa875da7       6ef582f3ec844       About a minute ago   Running             gcp-auth                  0                   f178f1acc2749       gcp-auth-7d69788767-52t8m
	87b8f92a82a79       1a024e390dd05       About a minute ago   Exited              patch                     0                   9e5d21e49744a       ingress-nginx-admission-patch-5xfh6
	a81cce371563d       1a024e390dd05       About a minute ago   Exited              create                    0                   0711454d2c9b9       ingress-nginx-admission-create-f7l2t
	8c99736a4ba09       20e3f2db01e81       About a minute ago   Running             yakd                      0                   29d6ac951130a       yakd-dashboard-9947fc6bf-8j2mj
	db3736f924eaf       7ce2150c8929b       About a minute ago   Running             local-path-provisioner    0                   96b7a84a147e4       local-path-provisioner-78b46b4d5c-ddrpj
	2a35ab8a61e7c       2437cf7621777       2 minutes ago        Running             coredns                   0                   c962d24dc543d       coredns-76f75df574-g8vgw
	63306e1955da8       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   9eb531817361c       storage-provisioner
	3366eca273722       4740c1948d3fc       2 minutes ago        Running             kindnet-cni               0                   c86668c247585       kindnet-425ft
	48bf7168aaa65       0e9b4a0d1e86d       2 minutes ago        Running             kube-proxy                0                   e8ac5e3fe9404       kube-proxy-27xjv
	3bbf36bc6c552       4b51f9f6bc9b9       2 minutes ago        Running             kube-scheduler            0                   4fbec80ba189e       kube-scheduler-addons-482679
	0e9919976bff8       121d70d9a3805       2 minutes ago        Running             kube-controller-manager   0                   8fee3ecb1d006       kube-controller-manager-addons-482679
	b3e82d379c9aa       014faa467e297       2 minutes ago        Running             etcd                      0                   5509c048432a7       etcd-addons-482679
	def2c471b2a7f       2581114f5709d       2 minutes ago        Running             kube-apiserver            0                   1d10a952c6d6d       kube-apiserver-addons-482679
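The gcp-auth-7d69788767-52t8m pod shown Running above is the webhook behind the gcp-auth addon messages earlier in this log. Per those messages, a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch, assuming client-go; the pod name, namespace, image, and label value of "true" are illustrative, not taken from this run.

// skip_gcp_auth.go: creates a pod labeled so the gcp-auth webhook skips it.
// The label key comes from the addon's own message above; everything else
// here is an assumption for illustration.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical name
			// Presence of this label key tells the webhook not to mount
			// GCP credentials into the pod.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod,
		metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}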
	
	
	==> containerd <==
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.173021590Z" level=warning msg="cleanup warnings time=\"2024-03-27T23:59:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9946 runtime=io.containerd.runc.v2\n"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.202702805Z" level=info msg="TearDown network for sandbox \"99dcb83b45862988a79ee3ec24119253ad5aadca4309476beeab87465d063f3c\" successfully"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.202748810Z" level=info msg="StopPodSandbox for \"99dcb83b45862988a79ee3ec24119253ad5aadca4309476beeab87465d063f3c\" returns successfully"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.253426973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479,Uid:3d309f4e-6ee4-4202-bead-938e749ac56f,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"bdf163b57df87ac1a9d2bdaa0e2d314f6e167a00372d9805162056221429a913\""
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.268291670Z" level=info msg="CreateContainer within sandbox \"bdf163b57df87ac1a9d2bdaa0e2d314f6e167a00372d9805162056221429a913\" for container &ContainerMetadata{Name:helper-pod,Attempt:0,}"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.286075380Z" level=info msg="CreateContainer within sandbox \"bdf163b57df87ac1a9d2bdaa0e2d314f6e167a00372d9805162056221429a913\" for &ContainerMetadata{Name:helper-pod,Attempt:0,} returns container id \"db1cb5fa8dec485e981cfa17582265360b251014bc0a5d4e002570aecbba4041\""
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.293236407Z" level=info msg="StartContainer for \"db1cb5fa8dec485e981cfa17582265360b251014bc0a5d4e002570aecbba4041\""
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.382379971Z" level=info msg="StartContainer for \"db1cb5fa8dec485e981cfa17582265360b251014bc0a5d4e002570aecbba4041\" returns successfully"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.409838989Z" level=info msg="shim disconnected" id=db1cb5fa8dec485e981cfa17582265360b251014bc0a5d4e002570aecbba4041
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.410024817Z" level=warning msg="cleaning up after shim disconnected" id=db1cb5fa8dec485e981cfa17582265360b251014bc0a5d4e002570aecbba4041 namespace=k8s.io
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.410041293Z" level=info msg="cleaning up dead shim"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.419981199Z" level=warning msg="cleanup warnings time=\"2024-03-27T23:59:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10076 runtime=io.containerd.runc.v2\n"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.472362635Z" level=info msg="RemoveContainer for \"e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a\""
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.483847187Z" level=info msg="RemoveContainer for \"e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a\" returns successfully"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.485554909Z" level=error msg="ContainerStatus for \"e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a\": not found"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.598925841Z" level=info msg="StopContainer for \"db3736f924eaf6b19c7350dc8f1dcfd5e53cb5f6ded3a0d23e29b534c0ffc939\" with timeout 30 (s)"
	Mar 27 23:59:46 addons-482679 containerd[761]: time="2024-03-27T23:59:46.599669196Z" level=info msg="Stop container \"db3736f924eaf6b19c7350dc8f1dcfd5e53cb5f6ded3a0d23e29b534c0ffc939\" with signal terminated"
	Mar 27 23:59:47 addons-482679 containerd[761]: time="2024-03-27T23:59:47.477386612Z" level=info msg="StopPodSandbox for \"bdf163b57df87ac1a9d2bdaa0e2d314f6e167a00372d9805162056221429a913\""
	Mar 27 23:59:47 addons-482679 containerd[761]: time="2024-03-27T23:59:47.477448117Z" level=info msg="Container to stop \"db1cb5fa8dec485e981cfa17582265360b251014bc0a5d4e002570aecbba4041\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 27 23:59:47 addons-482679 containerd[761]: time="2024-03-27T23:59:47.513387809Z" level=info msg="shim disconnected" id=bdf163b57df87ac1a9d2bdaa0e2d314f6e167a00372d9805162056221429a913
	Mar 27 23:59:47 addons-482679 containerd[761]: time="2024-03-27T23:59:47.513547643Z" level=warning msg="cleaning up after shim disconnected" id=bdf163b57df87ac1a9d2bdaa0e2d314f6e167a00372d9805162056221429a913 namespace=k8s.io
	Mar 27 23:59:47 addons-482679 containerd[761]: time="2024-03-27T23:59:47.513560755Z" level=info msg="cleaning up dead shim"
	Mar 27 23:59:47 addons-482679 containerd[761]: time="2024-03-27T23:59:47.528139858Z" level=warning msg="cleanup warnings time=\"2024-03-27T23:59:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10158 runtime=io.containerd.runc.v2\ntime=\"2024-03-27T23:59:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
	Mar 27 23:59:47 addons-482679 containerd[761]: time="2024-03-27T23:59:47.560874443Z" level=info msg="TearDown network for sandbox \"bdf163b57df87ac1a9d2bdaa0e2d314f6e167a00372d9805162056221429a913\" successfully"
	Mar 27 23:59:47 addons-482679 containerd[761]: time="2024-03-27T23:59:47.560919620Z" level=info msg="StopPodSandbox for \"bdf163b57df87ac1a9d2bdaa0e2d314f6e167a00372d9805162056221429a913\" returns successfully"
	
	
	==> coredns [2a35ab8a61e7c79c31adadc4b51316c6cdb4b0452dbe11c81fedb4b18115ca0b] <==
	[INFO] 10.244.0.19:42842 - 59084 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062802s
	[INFO] 10.244.0.19:42842 - 9694 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070317s
	[INFO] 10.244.0.19:56186 - 11385 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00251108s
	[INFO] 10.244.0.19:42842 - 42528 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001297863s
	[INFO] 10.244.0.19:56186 - 22905 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000162534s
	[INFO] 10.244.0.19:42842 - 57273 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001066528s
	[INFO] 10.244.0.19:42842 - 27836 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000108766s
	[INFO] 10.244.0.19:45096 - 6514 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109948s
	[INFO] 10.244.0.19:38296 - 30539 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056656s
	[INFO] 10.244.0.19:38296 - 42076 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000131478s
	[INFO] 10.244.0.19:45096 - 59091 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064245s
	[INFO] 10.244.0.19:38296 - 39562 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000086572s
	[INFO] 10.244.0.19:45096 - 58255 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047786s
	[INFO] 10.244.0.19:45096 - 39124 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046563s
	[INFO] 10.244.0.19:38296 - 9214 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051569s
	[INFO] 10.244.0.19:45096 - 19296 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047573s
	[INFO] 10.244.0.19:38296 - 7191 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050297s
	[INFO] 10.244.0.19:45096 - 41448 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047786s
	[INFO] 10.244.0.19:38296 - 41843 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049206s
	[INFO] 10.244.0.19:45096 - 56446 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001519087s
	[INFO] 10.244.0.19:38296 - 57806 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001866597s
	[INFO] 10.244.0.19:45096 - 20499 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001129248s
	[INFO] 10.244.0.19:45096 - 24967 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000082165s
	[INFO] 10.244.0.19:38296 - 28149 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002116498s
	[INFO] 10.244.0.19:38296 - 12514 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000519717s
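The NXDOMAIN/NOERROR pattern above is resolv.conf search-list expansion at work: the querying pod (10.244.0.19, in the ingress-nginx namespace) looks up hello-world-app.default.svc.cluster.local, which has four dots, fewer than the typical cluster ndots:5, so the resolver tries every search suffix first (each returning NXDOMAIN) before the bare name resolves with NOERROR. The sketch below generates that candidate list; the ndots value and the search domains are inferred from the queries in the log, not read from the pod's resolv.conf.

// search_expansion.go: a sketch of stub-resolver search-list expansion that
// reproduces the query sequence visible in the coredns log above.
package main

import (
	"fmt"
	"strings"
)

// candidates returns the names a stub resolver tries, in order: if the name
// has fewer than ndots dots, the search suffixes are tried first, then the
// name as-is; otherwise the name is tried first.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	suffixed := func() {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	if strings.Count(name, ".") < ndots {
		suffixed()
		out = append(out, name)
	} else {
		out = append(out, name)
		suffixed()
	}
	return out
}

func main() {
	search := []string{
		"ingress-nginx.svc.cluster.local", // querying pod's own namespace first
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal", // inherited from the host VPC resolver
	}
	// Four dots < ndots:5, so every suffixed candidate is queried (NXDOMAIN
	// in the log) before the exact name answers with NOERROR.
	for _, q := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}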
	
	
	==> describe nodes <==
	Name:               addons-482679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-482679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873
	                    minikube.k8s.io/name=addons-482679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T23_57_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-482679
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:57:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-482679
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 23:59:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:59:47 +0000   Wed, 27 Mar 2024 23:57:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:59:47 +0000   Wed, 27 Mar 2024 23:57:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:59:47 +0000   Wed, 27 Mar 2024 23:57:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:59:47 +0000   Wed, 27 Mar 2024 23:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-482679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb86c9df091e457f89e51ed729c30ce0
	  System UUID:                88d9ae12-e4f8-402d-963c-f5713b48548d
	  Boot ID:                    561aadd0-a15d-4e78-9187-a38c38772b44
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zdvkh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  gcp-auth                    gcp-auth-7d69788767-52t8m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 coredns-76f75df574-g8vgw                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-addons-482679                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m34s
	  kube-system                 kindnet-425ft                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-addons-482679               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-addons-482679      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-27xjv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-addons-482679               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  local-path-storage          local-path-provisioner-78b46b4d5c-ddrpj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-8j2mj             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   100m (5%)
	  memory             348Mi (4%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m19s                  kube-proxy       
	  Normal  Starting                 2m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m42s (x8 over 2m42s)  kubelet          Node addons-482679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s (x8 over 2m42s)  kubelet          Node addons-482679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s (x7 over 2m42s)  kubelet          Node addons-482679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m34s                  kubelet          Node addons-482679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s                  kubelet          Node addons-482679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s                  kubelet          Node addons-482679 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m34s                  kubelet          Node addons-482679 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m34s                  kubelet          Node addons-482679 status is now: NodeReady
	  Normal  RegisteredNode           2m22s                  node-controller  Node addons-482679 event: Registered Node addons-482679 in Controller
	
	
	==> dmesg <==
	[  +0.000944] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000abecea9f
	[  +0.001109] FS-Cache: N-key=[8] 'e2425c0100000000'
	[  +0.002757] FS-Cache: Duplicate cookie detected
	[  +0.000760] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001099] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=0000000065f36b62
	[  +0.001116] FS-Cache: O-key=[8] 'e2425c0100000000'
	[  +0.000756] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000b46e9eef
	[  +0.001155] FS-Cache: N-key=[8] 'e2425c0100000000'
	[  +1.587855] FS-Cache: Duplicate cookie detected
	[  +0.000858] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=00000000570a01e0
	[  +0.001107] FS-Cache: O-key=[8] 'e1425c0100000000'
	[  +0.000881] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001094] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=0000000004f60fee
	[  +0.001050] FS-Cache: N-key=[8] 'e1425c0100000000'
	[  +0.278051] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000950] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=00000000a0dd260d
	[  +0.001058] FS-Cache: O-key=[8] 'e7425c0100000000'
	[  +0.000856] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.001042] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000e23f78d6
	[  +0.001095] FS-Cache: N-key=[8] 'e7425c0100000000'
	[Mar27 23:23] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [b3e82d379c9aa86026f058d6ef6fa6a500bd426fd6b17584eb588f27c9d1f7a3] <==
	{"level":"info","ts":"2024-03-27T23:57:07.576055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-27T23:57:07.576375Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-27T23:57:07.582272Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-27T23:57:07.582556Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-27T23:57:07.582568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-27T23:57:07.591729Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-27T23:57:07.591781Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-27T23:57:08.141962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-27T23:57:08.142085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-27T23:57:08.142121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-27T23:57:08.142174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-27T23:57:08.142214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-27T23:57:08.142274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-27T23:57:08.142314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-27T23:57:08.150118Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-482679 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T23:57:08.150225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:57:08.150552Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:57:08.162592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-27T23:57:08.162934Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:57:08.163014Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:57:08.181995Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:57:08.182634Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T23:57:08.182733Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T23:57:08.170769Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:57:08.184827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [83571fa875da70d3885554317fa46b9d3c20072e6274ec4e42bbfae96029ac08] <==
	2024/03/27 23:58:17 GCP Auth Webhook started!
	2024/03/27 23:58:34 Ready to marshal response ...
	2024/03/27 23:58:34 Ready to write response ...
	2024/03/27 23:58:47 Ready to marshal response ...
	2024/03/27 23:58:47 Ready to write response ...
	2024/03/27 23:58:57 Ready to marshal response ...
	2024/03/27 23:58:57 Ready to write response ...
	2024/03/27 23:59:07 Ready to marshal response ...
	2024/03/27 23:59:07 Ready to write response ...
	2024/03/27 23:59:15 Ready to marshal response ...
	2024/03/27 23:59:15 Ready to write response ...
	2024/03/27 23:59:37 Ready to marshal response ...
	2024/03/27 23:59:37 Ready to write response ...
	2024/03/27 23:59:37 Ready to marshal response ...
	2024/03/27 23:59:37 Ready to write response ...
	2024/03/27 23:59:45 Ready to marshal response ...
	2024/03/27 23:59:45 Ready to write response ...
	
	
	==> kernel <==
	 23:59:48 up  7:42,  0 users,  load average: 2.44, 2.54, 3.01
	Linux addons-482679 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3366eca273722b9ac99d067d5680385bdcef399d0e6bd8167537433aa723d237] <==
	I0327 23:57:38.990172       1 main.go:227] handling current node
	I0327 23:57:49.004891       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:57:49.004923       1 main.go:227] handling current node
	I0327 23:57:59.018034       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:57:59.018060       1 main.go:227] handling current node
	I0327 23:58:09.022521       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:09.022548       1 main.go:227] handling current node
	I0327 23:58:19.028486       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:19.028515       1 main.go:227] handling current node
	I0327 23:58:29.032741       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:29.032769       1 main.go:227] handling current node
	I0327 23:58:39.046030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:39.046065       1 main.go:227] handling current node
	I0327 23:58:49.059710       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:49.059739       1 main.go:227] handling current node
	I0327 23:58:59.063769       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:58:59.063797       1 main.go:227] handling current node
	I0327 23:59:09.074040       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:59:09.074068       1 main.go:227] handling current node
	I0327 23:59:19.086731       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:59:19.086760       1 main.go:227] handling current node
	I0327 23:59:29.094098       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:59:29.094130       1 main.go:227] handling current node
	I0327 23:59:39.105852       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 23:59:39.105891       1 main.go:227] handling current node
	
	
	==> kube-apiserver [def2c471b2a7f41c9953ec79ec1dd19d280412a4b314562fe9930012750407d9] <==
	W0327 23:58:02.697227       1 handler_proxy.go:93] no RequestInfo found in the context
	E0327 23:58:02.697302       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0327 23:58:02.748570       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0327 23:58:51.551008       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0327 23:58:52.618069       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0327 23:58:56.121148       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0327 23:58:57.120873       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0327 23:58:57.571210       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.119.125"}
	E0327 23:58:58.102971       1 watch.go:253] http2: stream closed
	I0327 23:59:03.706937       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0327 23:59:07.302974       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.175.5"}
	I0327 23:59:31.223605       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.223646       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:59:31.257729       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.258032       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:59:31.274471       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.274942       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:59:31.303241       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.303296       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:59:31.314232       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:59:31.314288       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0327 23:59:32.275441       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0327 23:59:32.322165       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0327 23:59:32.332810       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [0e9919976bff8676f14254f57114a57c427cbe952df2be16459f9dc54e130dd0] <==
	E0327 23:59:32.334588       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:33.398053       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:33.398087       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:33.482094       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:33.482130       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:33.917553       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:33.917586       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 23:59:35.321012       1 namespace_controller.go:182] "Namespace has been deleted" namespace="ingress-nginx"
	W0327 23:59:35.422297       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:35.422332       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:35.978763       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:35.978800       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:36.343263       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:36.343296       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 23:59:37.646709       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0327 23:59:37.822344       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0327 23:59:38.817253       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:38.817291       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:39.986003       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:39.986036       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:59:40.997239       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:59:40.997278       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 23:59:42.067639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.579µs"
	I0327 23:59:45.929078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5446596998" duration="4.497µs"
	I0327 23:59:46.573045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="4.824µs"
	
	
	==> kube-proxy [48bf7168aaa6505e6745f9880d4715e768b65da521579710142f84feeb41824d] <==
	I0327 23:57:28.605763       1 server_others.go:72] "Using iptables proxy"
	I0327 23:57:28.619056       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0327 23:57:28.646608       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0327 23:57:28.646645       1 server_others.go:168] "Using iptables Proxier"
	I0327 23:57:28.648424       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0327 23:57:28.648440       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0327 23:57:28.648472       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 23:57:28.648704       1 server.go:865] "Version info" version="v1.29.3"
	I0327 23:57:28.648715       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:57:28.650224       1 config.go:188] "Starting service config controller"
	I0327 23:57:28.650247       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 23:57:28.650278       1 config.go:97] "Starting endpoint slice config controller"
	I0327 23:57:28.650283       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 23:57:28.652797       1 config.go:315] "Starting node config controller"
	I0327 23:57:28.652816       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 23:57:28.750402       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 23:57:28.750460       1 shared_informer.go:318] Caches are synced for service config
	I0327 23:57:28.753376       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3bbf36bc6c552b04b3ca087a3c2269b74da8d1c351a786b59ab98b4b90efc359] <==
	W0327 23:57:11.406372       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 23:57:11.406428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 23:57:11.406511       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:57:11.406531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:57:11.406670       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 23:57:11.406690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 23:57:11.406768       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 23:57:11.406787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 23:57:11.406925       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 23:57:11.406945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 23:57:11.407105       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 23:57:11.407127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 23:57:11.407638       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 23:57:11.407665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 23:57:11.407749       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 23:57:11.407771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 23:57:11.407948       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 23:57:11.407970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 23:57:11.408136       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 23:57:11.408158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 23:57:11.408307       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 23:57:11.408329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 23:57:11.408916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 23:57:11.408941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0327 23:57:12.999158       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 23:59:45 addons-482679 kubelet[1513]: I0327 23:59:45.787731    1513 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3d309f4e-6ee4-4202-bead-938e749ac56f-data\") pod \"helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479\" (UID: \"3d309f4e-6ee4-4202-bead-938e749ac56f\") " pod="local-path-storage/helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479"
	Mar 27 23:59:45 addons-482679 kubelet[1513]: I0327 23:59:45.787784    1513 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtww\" (UniqueName: \"kubernetes.io/projected/3d309f4e-6ee4-4202-bead-938e749ac56f-kube-api-access-kgtww\") pod \"helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479\" (UID: \"3d309f4e-6ee4-4202-bead-938e749ac56f\") " pod="local-path-storage/helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479"
	Mar 27 23:59:45 addons-482679 kubelet[1513]: I0327 23:59:45.787897    1513 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3d309f4e-6ee4-4202-bead-938e749ac56f-script\") pod \"helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479\" (UID: \"3d309f4e-6ee4-4202-bead-938e749ac56f\") " pod="local-path-storage/helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479"
	Mar 27 23:59:45 addons-482679 kubelet[1513]: I0327 23:59:45.787940    1513 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3d309f4e-6ee4-4202-bead-938e749ac56f-gcp-creds\") pod \"helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479\" (UID: \"3d309f4e-6ee4-4202-bead-938e749ac56f\") " pod="local-path-storage/helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479"
	Mar 27 23:59:46 addons-482679 kubelet[1513]: I0327 23:59:46.055107    1513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2863a11-4b44-44a8-b262-9feb8839b92d" path="/var/lib/kubelet/pods/f2863a11-4b44-44a8-b262-9feb8839b92d/volumes"
	Mar 27 23:59:46 addons-482679 kubelet[1513]: I0327 23:59:46.294388    1513 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g45q8\" (UniqueName: \"kubernetes.io/projected/90db4a3b-381a-4a28-b557-2e8254c3395a-kube-api-access-g45q8\") pod \"90db4a3b-381a-4a28-b557-2e8254c3395a\" (UID: \"90db4a3b-381a-4a28-b557-2e8254c3395a\") "
	Mar 27 23:59:46 addons-482679 kubelet[1513]: I0327 23:59:46.296343    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90db4a3b-381a-4a28-b557-2e8254c3395a-kube-api-access-g45q8" (OuterVolumeSpecName: "kube-api-access-g45q8") pod "90db4a3b-381a-4a28-b557-2e8254c3395a" (UID: "90db4a3b-381a-4a28-b557-2e8254c3395a"). InnerVolumeSpecName "kube-api-access-g45q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 23:59:46 addons-482679 kubelet[1513]: I0327 23:59:46.395525    1513 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g45q8\" (UniqueName: \"kubernetes.io/projected/90db4a3b-381a-4a28-b557-2e8254c3395a-kube-api-access-g45q8\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:46 addons-482679 kubelet[1513]: I0327 23:59:46.467391    1513 scope.go:117] "RemoveContainer" containerID="e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a"
	Mar 27 23:59:46 addons-482679 kubelet[1513]: I0327 23:59:46.484257    1513 scope.go:117] "RemoveContainer" containerID="e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a"
	Mar 27 23:59:46 addons-482679 kubelet[1513]: E0327 23:59:46.486130    1513 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a\": not found" containerID="e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a"
	Mar 27 23:59:46 addons-482679 kubelet[1513]: I0327 23:59:46.486211    1513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a"} err="failed to get container status \"e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e53c41995565f620bfe34e80c3e478cfb49559201899c78f433c23399a48a96a\": not found"
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.704350    1513 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgtww\" (UniqueName: \"kubernetes.io/projected/3d309f4e-6ee4-4202-bead-938e749ac56f-kube-api-access-kgtww\") pod \"3d309f4e-6ee4-4202-bead-938e749ac56f\" (UID: \"3d309f4e-6ee4-4202-bead-938e749ac56f\") "
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.704433    1513 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3d309f4e-6ee4-4202-bead-938e749ac56f-script\") pod \"3d309f4e-6ee4-4202-bead-938e749ac56f\" (UID: \"3d309f4e-6ee4-4202-bead-938e749ac56f\") "
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.704473    1513 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3d309f4e-6ee4-4202-bead-938e749ac56f-gcp-creds\") pod \"3d309f4e-6ee4-4202-bead-938e749ac56f\" (UID: \"3d309f4e-6ee4-4202-bead-938e749ac56f\") "
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.704520    1513 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3d309f4e-6ee4-4202-bead-938e749ac56f-data\") pod \"3d309f4e-6ee4-4202-bead-938e749ac56f\" (UID: \"3d309f4e-6ee4-4202-bead-938e749ac56f\") "
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.704647    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d309f4e-6ee4-4202-bead-938e749ac56f-data" (OuterVolumeSpecName: "data") pod "3d309f4e-6ee4-4202-bead-938e749ac56f" (UID: "3d309f4e-6ee4-4202-bead-938e749ac56f"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.705508    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d309f4e-6ee4-4202-bead-938e749ac56f-script" (OuterVolumeSpecName: "script") pod "3d309f4e-6ee4-4202-bead-938e749ac56f" (UID: "3d309f4e-6ee4-4202-bead-938e749ac56f"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.705583    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d309f4e-6ee4-4202-bead-938e749ac56f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "3d309f4e-6ee4-4202-bead-938e749ac56f" (UID: "3d309f4e-6ee4-4202-bead-938e749ac56f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.709006    1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d309f4e-6ee4-4202-bead-938e749ac56f-kube-api-access-kgtww" (OuterVolumeSpecName: "kube-api-access-kgtww") pod "3d309f4e-6ee4-4202-bead-938e749ac56f" (UID: "3d309f4e-6ee4-4202-bead-938e749ac56f"). InnerVolumeSpecName "kube-api-access-kgtww". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.805020    1513 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3d309f4e-6ee4-4202-bead-938e749ac56f-script\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.805289    1513 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3d309f4e-6ee4-4202-bead-938e749ac56f-gcp-creds\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.805318    1513 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3d309f4e-6ee4-4202-bead-938e749ac56f-data\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:47 addons-482679 kubelet[1513]: I0327 23:59:47.805479    1513 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kgtww\" (UniqueName: \"kubernetes.io/projected/3d309f4e-6ee4-4202-bead-938e749ac56f-kube-api-access-kgtww\") on node \"addons-482679\" DevicePath \"\""
	Mar 27 23:59:48 addons-482679 kubelet[1513]: I0327 23:59:48.051884    1513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90db4a3b-381a-4a28-b557-2e8254c3395a" path="/var/lib/kubelet/pods/90db4a3b-381a-4a28-b557-2e8254c3395a/volumes"
	
	
	==> storage-provisioner [63306e1955da83a00baddf978af7b1509abe2edf5d97f62bab1b55d5543e0acb] <==
	I0327 23:57:34.068993       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 23:57:34.113332       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 23:57:34.113371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 23:57:34.125857       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 23:57:34.126079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-482679_38ceff6b-df1c-4724-a410-a20da8d23b6a!
	I0327 23:57:34.127007       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f91cafc-f89e-4226-8ddb-30b6386c3c0a", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-482679_38ceff6b-df1c-4724-a410-a20da8d23b6a became leader
	I0327 23:57:34.226465       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-482679_38ceff6b-df1c-4724-a410-a20da8d23b6a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-482679 -n addons-482679
helpers_test.go:261: (dbg) Run:  kubectl --context addons-482679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-482679 describe pod helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-482679 describe pod helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479: exit status 1 (97.817083ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-482679 describe pod helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (3.16s)
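
One caveat in the post-mortem above: the pod list is produced without namespaces, and the local-path helper pod is both transient and outside the default namespace, so the follow-up describe can only return NotFound. A minimal sketch of a more tolerant check, assuming the same context; the namespace flag and the fallback echo are additions for illustration, not part of the test helper:

	# The helper pod lives in the local-path-storage namespace (see the kubelet
	# log above) and is deleted as soon as its cleanup job finishes, so describe
	# it there and tolerate the deletion race:
	kubectl --context addons-482679 -n local-path-storage describe pod \
	  helper-pod-delete-pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479 \
	  || echo "pod already deleted"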

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image load --daemon gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 image load --daemon gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr: (4.32689716s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-197628" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image load --daemon gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 image load --daemon gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr: (3.25212635s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-197628" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.676820811s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-197628
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image load --daemon gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 image load --daemon gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr: (3.09164378s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-197628" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.02s)
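
The three image-load failures above exercise the same verification: after "image load --daemon", the test lists the images known to the cluster's containerd and expects the freshly loaded tag to appear. A hedged, by-hand reproduction of that sequence using the exact commands from the logs; the final grep-or-echo check is an illustrative stand-in for the test's own comparison:

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-197628
	out/minikube-linux-arm64 -p functional-197628 image load --daemon gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr
	# The assertion that fails in all three tests: the tag should now be visible
	# to the containerd runtime inside the minikube node.
	out/minikube-linux-arm64 -p functional-197628 image ls | grep addon-resizer || echo "image not present in runtime"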

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image save gcr.io/google-containers/addon-resizer:functional-197628 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)
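
Here the save returns in about half a second but the tarball is never written to the workspace. A sketch of the intended round-trip with the existence check made explicit, using the path and profile name from the log; the test -f line approximates the test's assertion rather than quoting its code:

	out/minikube-linux-arm64 -p functional-197628 image save gcr.io/google-containers/addon-resizer:functional-197628 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
	# The failed assertion, roughly: the tar file should exist after image save.
	test -f /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar || echo "tar was not written"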

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0328 00:05:55.930631 1990911 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:05:55.931768 1990911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:05:55.931806 1990911 out.go:304] Setting ErrFile to fd 2...
	I0328 00:05:55.931830 1990911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:05:55.932128 1990911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:05:55.932793 1990911 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:05:55.932959 1990911 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:05:55.933461 1990911 cli_runner.go:164] Run: docker container inspect functional-197628 --format={{.State.Status}}
	I0328 00:05:55.949098 1990911 ssh_runner.go:195] Run: systemctl --version
	I0328 00:05:55.949207 1990911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-197628
	I0328 00:05:55.964658 1990911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35054 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/functional-197628/id_rsa Username:docker}
	I0328 00:05:56.054472 1990911 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0328 00:05:56.054544 1990911 cache_images.go:254] Failed to load cached images for profile functional-197628. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0328 00:05:56.054565 1990911 cache_images.go:262] succeeded pushing to: 
	I0328 00:05:56.054577 1990911 cache_images.go:263] failed pushing to: functional-197628

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
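
The stderr above pins down the cause: the stat of addon-resizer-save.tar fails with "no such file or directory", so this test is a downstream casualty of the ImageSaveToFile failure rather than an independent load bug. A sketch that makes the dependency explicit; the variable and the guard are illustrative assumptions:

	tar=/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	if [ -f "$tar" ]; then
	  out/minikube-linux-arm64 -p functional-197628 image load "$tar" --alsologtostderr
	else
	  echo "skipping load: $tar is missing because the earlier image save never wrote it"
	fi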

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (374.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-847679 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0328 00:43:23.479132 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-847679 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m10.70958718s)

                                                
                                                
-- stdout --
	* [old-k8s-version-847679] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-847679" primary control-plane node in "old-k8s-version-847679" cluster
	* Pulling base image v0.0.43-beta.0 ...
	* Restarting existing docker container for "old-k8s-version-847679" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-847679 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:43:06.217429 2154103 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:43:06.217743 2154103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:43:06.217773 2154103 out.go:304] Setting ErrFile to fd 2...
	I0328 00:43:06.217792 2154103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:43:06.218169 2154103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:43:06.218581 2154103 out.go:298] Setting JSON to false
	I0328 00:43:06.219642 2154103 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30324,"bootTime":1711556262,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 00:43:06.219735 2154103 start.go:139] virtualization:  
	I0328 00:43:06.222473 2154103 out.go:177] * [old-k8s-version-847679] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 00:43:06.225283 2154103 out.go:177]   - MINIKUBE_LOCATION=18158
	I0328 00:43:06.227843 2154103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:43:06.225342 2154103 notify.go:220] Checking for updates...
	I0328 00:43:06.230498 2154103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0328 00:43:06.232384 2154103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0328 00:43:06.234029 2154103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 00:43:06.235916 2154103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:43:06.238163 2154103 config.go:182] Loaded profile config "old-k8s-version-847679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0328 00:43:06.240583 2154103 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 00:43:06.242269 2154103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:43:06.264892 2154103 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 00:43:06.265019 2154103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:43:06.367238 2154103 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:67 SystemTime:2024-03-28 00:43:06.356279087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:43:06.367344 2154103 docker.go:295] overlay module found
	I0328 00:43:06.371621 2154103 out.go:177] * Using the docker driver based on existing profile
	I0328 00:43:06.373721 2154103 start.go:297] selected driver: docker
	I0328 00:43:06.373758 2154103 start.go:901] validating driver "docker" against &{Name:old-k8s-version-847679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:43:06.373880 2154103 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:43:06.374578 2154103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:43:06.468865 2154103 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:67 SystemTime:2024-03-28 00:43:06.454458297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:43:06.469211 2154103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:43:06.469271 2154103 cni.go:84] Creating CNI manager for ""
	I0328 00:43:06.469289 2154103 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 00:43:06.469331 2154103 start.go:340] cluster config:
	{Name:old-k8s-version-847679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:43:06.472359 2154103 out.go:177] * Starting "old-k8s-version-847679" primary control-plane node in "old-k8s-version-847679" cluster
	I0328 00:43:06.474205 2154103 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 00:43:06.476068 2154103 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0328 00:43:06.477857 2154103 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0328 00:43:06.477944 2154103 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0328 00:43:06.477983 2154103 cache.go:56] Caching tarball of preloaded images
	I0328 00:43:06.478069 2154103 preload.go:173] Found /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 00:43:06.478079 2154103 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
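The three preload lines above boil down to a cache lookup: build the expected tarball path and stat it before deciding whether to download. A minimal Go sketch of that decision, with a hypothetical preloadPath helper (not minikube's actual preload.go API):

    // Sketch only: mirrors the stat-then-skip decision in the preload lines above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath builds the cache path for a k8s version / runtime / arch triple,
    // following the file-name pattern visible in the log.
    func preloadPath(cacheDir, k8sVersion, runtime, arch string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
            k8sVersion, runtime, arch)
        return filepath.Join(cacheDir, "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.20.0", "containerd", "arm64")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        } else {
            fmt.Println("no local preload, would download:", p)
        }
    }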
	I0328 00:43:06.478187 2154103 profile.go:142] Saving config to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/config.json ...
	I0328 00:43:06.478394 2154103 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0328 00:43:06.522005 2154103 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon, skipping pull
	I0328 00:43:06.522047 2154103 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in daemon, skipping load
	I0328 00:43:06.522062 2154103 cache.go:194] Successfully downloaded all kic artifacts
	I0328 00:43:06.522090 2154103 start.go:360] acquireMachinesLock for old-k8s-version-847679: {Name:mkdccd8cfab342a8c65a06a3491156990e31e4be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:43:06.522167 2154103 start.go:364] duration metric: took 51.348µs to acquireMachinesLock for "old-k8s-version-847679"
	I0328 00:43:06.522196 2154103 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:43:06.522206 2154103 fix.go:54] fixHost starting: 
	I0328 00:43:06.522510 2154103 cli_runner.go:164] Run: docker container inspect old-k8s-version-847679 --format={{.State.Status}}
	I0328 00:43:06.554007 2154103 fix.go:112] recreateIfNeeded on old-k8s-version-847679: state=Stopped err=<nil>
	W0328 00:43:06.554040 2154103 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:43:06.556392 2154103 out.go:177] * Restarting existing docker container for "old-k8s-version-847679" ...
	I0328 00:43:06.558142 2154103 cli_runner.go:164] Run: docker start old-k8s-version-847679
	I0328 00:43:06.898248 2154103 cli_runner.go:164] Run: docker container inspect old-k8s-version-847679 --format={{.State.Status}}
	I0328 00:43:06.937328 2154103 kic.go:430] container "old-k8s-version-847679" state is running.
	I0328 00:43:06.937723 2154103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-847679
	I0328 00:43:06.966239 2154103 profile.go:142] Saving config to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/config.json ...
	I0328 00:43:06.966467 2154103 machine.go:94] provisionDockerMachine start ...
	I0328 00:43:06.966528 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:07.013206 2154103 main.go:141] libmachine: Using SSH client type: native
	I0328 00:43:07.013473 2154103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35334 <nil> <nil>}
	I0328 00:43:07.013482 2154103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:43:07.015565 2154103 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0328 00:43:10.145649 2154103 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847679
	
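The handshake EOF at 00:43:07 followed by a clean hostname at 00:43:10 is the expected pattern right after docker start: sshd inside the container is not up yet, so the client retries until the forwarded port accepts connections. A toy version of that wait loop, a plain TCP poll rather than libmachine's real dialer:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort polls until addr accepts a TCP connection or the timeout passes.
    // Illustrative stand-in for the SSH retry behavior seen in the log above.
    func waitForPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if c, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
                c.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        fmt.Println(waitForPort("127.0.0.1:35334", 30*time.Second))
    }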
	I0328 00:43:10.145738 2154103 ubuntu.go:169] provisioning hostname "old-k8s-version-847679"
	I0328 00:43:10.145839 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:10.168079 2154103 main.go:141] libmachine: Using SSH client type: native
	I0328 00:43:10.168417 2154103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35334 <nil> <nil>}
	I0328 00:43:10.168429 2154103 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847679 && echo "old-k8s-version-847679" | sudo tee /etc/hostname
	I0328 00:43:10.322538 2154103 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847679
	
	I0328 00:43:10.322645 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:10.343289 2154103 main.go:141] libmachine: Using SSH client type: native
	I0328 00:43:10.343536 2154103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35334 <nil> <nil>}
	I0328 00:43:10.343563 2154103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847679/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:43:10.478334 2154103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
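The SSH snippet above either rewrites an existing 127.0.1.1 entry in /etc/hosts or appends one. The same logic in Go, for illustration (setLoopbackHostname is a hypothetical helper, not ubuntu.go):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setLoopbackHostname rewrites an existing 127.0.1.1 entry to the new
    // hostname, or appends one if none exists, matching the shell logic above.
    func setLoopbackHostname(hosts, name string) string {
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(setLoopbackHostname("127.0.0.1 localhost\n127.0.1.1 oldname\n", "old-k8s-version-847679"))
    }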
	I0328 00:43:10.478404 2154103 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18158-1951721/.minikube CaCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18158-1951721/.minikube}
	I0328 00:43:10.478444 2154103 ubuntu.go:177] setting up certificates
	I0328 00:43:10.478486 2154103 provision.go:84] configureAuth start
	I0328 00:43:10.478572 2154103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-847679
	I0328 00:43:10.499644 2154103 provision.go:143] copyHostCerts
	I0328 00:43:10.499712 2154103 exec_runner.go:144] found /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.pem, removing ...
	I0328 00:43:10.499729 2154103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.pem
	I0328 00:43:10.499801 2154103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.pem (1078 bytes)
	I0328 00:43:10.499935 2154103 exec_runner.go:144] found /home/jenkins/minikube-integration/18158-1951721/.minikube/cert.pem, removing ...
	I0328 00:43:10.499941 2154103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18158-1951721/.minikube/cert.pem
	I0328 00:43:10.499969 2154103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/cert.pem (1123 bytes)
	I0328 00:43:10.500062 2154103 exec_runner.go:144] found /home/jenkins/minikube-integration/18158-1951721/.minikube/key.pem, removing ...
	I0328 00:43:10.500068 2154103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18158-1951721/.minikube/key.pem
	I0328 00:43:10.500099 2154103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/key.pem (1675 bytes)
	I0328 00:43:10.500178 2154103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847679 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-847679]
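configureAuth regenerates the server certificate so its SANs cover every name in the san=[...] list above. A self-contained sketch of issuing such a cert with Go's crypto/x509; it self-signs for brevity, whereas the real server.pem is signed with the CA key named in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-847679"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the san=[...] list in the log line above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-847679"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        // Self-signed here for brevity; the real cert is CA-signed.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }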
	I0328 00:43:11.299951 2154103 provision.go:177] copyRemoteCerts
	I0328 00:43:11.300028 2154103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:43:11.300081 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:11.316259 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:11.413312 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:43:11.442986 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 00:43:11.473572 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:43:11.503803 2154103 provision.go:87] duration metric: took 1.025285953s to configureAuth
	I0328 00:43:11.503853 2154103 ubuntu.go:193] setting minikube options for container-runtime
	I0328 00:43:11.504118 2154103 config.go:182] Loaded profile config "old-k8s-version-847679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0328 00:43:11.504144 2154103 machine.go:97] duration metric: took 4.537668369s to provisionDockerMachine
	I0328 00:43:11.504156 2154103 start.go:293] postStartSetup for "old-k8s-version-847679" (driver="docker")
	I0328 00:43:11.504180 2154103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:43:11.504256 2154103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:43:11.504318 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:11.523702 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:11.616957 2154103 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:43:11.620485 2154103 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 00:43:11.620516 2154103 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 00:43:11.620528 2154103 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 00:43:11.620535 2154103 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 00:43:11.620545 2154103 filesync.go:126] Scanning /home/jenkins/minikube-integration/18158-1951721/.minikube/addons for local assets ...
	I0328 00:43:11.620603 2154103 filesync.go:126] Scanning /home/jenkins/minikube-integration/18158-1951721/.minikube/files for local assets ...
	I0328 00:43:11.620683 2154103 filesync.go:149] local asset: /home/jenkins/minikube-integration/18158-1951721/.minikube/files/etc/ssl/certs/19571412.pem -> 19571412.pem in /etc/ssl/certs
	I0328 00:43:11.620785 2154103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:43:11.630061 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/files/etc/ssl/certs/19571412.pem --> /etc/ssl/certs/19571412.pem (1708 bytes)
	I0328 00:43:11.656874 2154103 start.go:296] duration metric: took 152.665617ms for postStartSetup
	I0328 00:43:11.656961 2154103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:43:11.657009 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:11.677263 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:11.763767 2154103 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 00:43:11.769610 2154103 fix.go:56] duration metric: took 5.247390127s for fixHost
	I0328 00:43:11.769636 2154103 start.go:83] releasing machines lock for "old-k8s-version-847679", held for 5.247455422s
	I0328 00:43:11.769708 2154103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-847679
	I0328 00:43:11.798765 2154103 ssh_runner.go:195] Run: cat /version.json
	I0328 00:43:11.798813 2154103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:43:11.798828 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:11.798869 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:11.821500 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:11.831087 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:11.917598 2154103 ssh_runner.go:195] Run: systemctl --version
	I0328 00:43:12.037513 2154103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 00:43:12.043420 2154103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0328 00:43:12.070789 2154103 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0328 00:43:12.070900 2154103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:43:12.080928 2154103 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
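The find/mv command above sidelines any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled, so only the CNI minikube manages stays active. A sketch of the same idea in Go (hypothetical helper, not cni.go):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfigs renames matching CNI configs so the runtime ignores
    // them, mirroring the shell find/mv in the log above.
    func disableBridgeConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var moved []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return moved, err
                }
                moved = append(moved, src)
            }
        }
        return moved, nil
    }

    func main() {
        moved, err := disableBridgeConfigs("/etc/cni/net.d")
        fmt.Println(moved, err)
    }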
	I0328 00:43:12.080972 2154103 start.go:494] detecting cgroup driver to use...
	I0328 00:43:12.081030 2154103 detect.go:196] detected "cgroupfs" cgroup driver on host os
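One way to reach the same "cgroupfs" answer is to ask the engine itself; the docker info dump earlier already reports CgroupDriver:cgroupfs. minikube's detect.go may use different heuristics, so this is an illustration, not its code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // dockerCgroupDriver queries the engine's reported cgroup driver via
    // `docker info --format {{.CgroupDriver}}`.
    func dockerCgroupDriver() (string, error) {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        driver, err := dockerCgroupDriver()
        fmt.Println(driver, err)
    }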
	I0328 00:43:12.081105 2154103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 00:43:12.097035 2154103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:43:12.110536 2154103 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:43:12.110644 2154103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:43:12.124863 2154103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:43:12.137812 2154103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:43:12.239326 2154103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:43:12.343239 2154103 docker.go:233] disabling docker service ...
	I0328 00:43:12.343349 2154103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:43:12.357369 2154103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:43:12.369468 2154103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:43:12.480709 2154103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:43:12.600532 2154103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:43:12.613111 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:43:12.630504 2154103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0328 00:43:12.640706 2154103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 00:43:12.650693 2154103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 00:43:12.650810 2154103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 00:43:12.660882 2154103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:43:12.670604 2154103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 00:43:12.684436 2154103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:43:12.697973 2154103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:43:12.710428 2154103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
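The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox image, force SystemdCgroup = false to match the cgroupfs driver, normalize the runc runtime type, and point conf_dir at /etc/cni/net.d. A Go stand-in for the SystemdCgroup rewrite (sketch only; the real step is the sed shown in the log):

    package main

    import (
        "fmt"
        "regexp"
    )

    // forceCgroupfs rewrites any SystemdCgroup assignment to false,
    // preserving the line's indentation, like the sed command above.
    func forceCgroupfs(configTOML string) string {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
        sample := "[plugins]\n  [plugins.cri]\n    SystemdCgroup = true\n"
        fmt.Print(forceCgroupfs(sample))
    }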
	I0328 00:43:12.726524 2154103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:43:12.736898 2154103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:43:12.755278 2154103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:43:12.868266 2154103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 00:43:13.114532 2154103 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0328 00:43:13.114605 2154103 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0328 00:43:13.125423 2154103 start.go:562] Will wait 60s for crictl version
	I0328 00:43:13.125501 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:43:13.129610 2154103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:43:13.184871 2154103 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0328 00:43:13.184945 2154103 ssh_runner.go:195] Run: containerd --version
	I0328 00:43:13.205584 2154103 ssh_runner.go:195] Run: containerd --version
	I0328 00:43:13.233003 2154103 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0328 00:43:13.235249 2154103 cli_runner.go:164] Run: docker network inspect old-k8s-version-847679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 00:43:13.258205 2154103 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0328 00:43:13.262030 2154103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:43:13.273513 2154103 kubeadm.go:877] updating cluster {Name:old-k8s-version-847679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:43:13.273655 2154103 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0328 00:43:13.273711 2154103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:43:13.324624 2154103 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 00:43:13.324644 2154103 containerd.go:534] Images already preloaded, skipping extraction
	I0328 00:43:13.324705 2154103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:43:13.374105 2154103 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 00:43:13.374130 2154103 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:43:13.374138 2154103 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0328 00:43:13.374247 2154103 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-847679 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:43:13.374315 2154103 ssh_runner.go:195] Run: sudo crictl info
	I0328 00:43:13.424694 2154103 cni.go:84] Creating CNI manager for ""
	I0328 00:43:13.424721 2154103 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 00:43:13.424732 2154103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:43:13.424752 2154103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847679 NodeName:old-k8s-version-847679 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 00:43:13.424886 2154103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-847679"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
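The kubeadm/kubelet/kube-proxy YAML above is rendered from templates fed by the cluster config dumped earlier. A toy text/template rendering of just the KubeletConfiguration fragment; the struct and template are invented for illustration, not minikube's own:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletOpts is a hypothetical subset of the values that flow into the
    // KubeletConfiguration block shown in the log.
    type kubeletOpts struct {
        CgroupDriver  string
        ClusterDomain string
    }

    var kubeletTmpl = template.Must(template.New("kubelet").Parse(
        "apiVersion: kubelet.config.k8s.io/v1beta1\n" +
            "kind: KubeletConfiguration\n" +
            "cgroupDriver: {{.CgroupDriver}}\n" +
            "clusterDomain: \"{{.ClusterDomain}}\"\n"))

    func main() {
        kubeletTmpl.Execute(os.Stdout, kubeletOpts{CgroupDriver: "cgroupfs", ClusterDomain: "cluster.local"})
    }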
	I0328 00:43:13.424955 2154103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 00:43:13.433888 2154103 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:43:13.433976 2154103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:43:13.450783 2154103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0328 00:43:13.480831 2154103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:43:13.498582 2154103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0328 00:43:13.518402 2154103 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0328 00:43:13.522352 2154103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:43:13.533088 2154103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:43:13.638177 2154103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:43:13.652134 2154103 certs.go:68] Setting up /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679 for IP: 192.168.85.2
	I0328 00:43:13.652157 2154103 certs.go:194] generating shared ca certs ...
	I0328 00:43:13.652173 2154103 certs.go:226] acquiring lock for ca certs: {Name:mka210db6b2adfd3b9800e3583e6835c01f5e440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:43:13.652310 2154103 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key
	I0328 00:43:13.652358 2154103 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key
	I0328 00:43:13.652368 2154103 certs.go:256] generating profile certs ...
	I0328 00:43:13.652454 2154103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.key
	I0328 00:43:13.652559 2154103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/apiserver.key.27f1250e
	I0328 00:43:13.652604 2154103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/proxy-client.key
	I0328 00:43:13.652716 2154103 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/1957141.pem (1338 bytes)
	W0328 00:43:13.652745 2154103 certs.go:480] ignoring /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/1957141_empty.pem, impossibly tiny 0 bytes
	I0328 00:43:13.652758 2154103 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 00:43:13.652787 2154103 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:43:13.652816 2154103 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:43:13.652849 2154103 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem (1675 bytes)
	I0328 00:43:13.652916 2154103 certs.go:484] found cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/files/etc/ssl/certs/19571412.pem (1708 bytes)
	I0328 00:43:13.653562 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:43:13.747250 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:43:13.808491 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:43:13.833178 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:43:13.857792 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 00:43:13.884108 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 00:43:13.908207 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:43:13.938435 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:43:13.962876 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:43:13.987166 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/1957141.pem --> /usr/share/ca-certificates/1957141.pem (1338 bytes)
	I0328 00:43:14.013381 2154103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/files/etc/ssl/certs/19571412.pem --> /usr/share/ca-certificates/19571412.pem (1708 bytes)
	I0328 00:43:14.045126 2154103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:43:14.064082 2154103 ssh_runner.go:195] Run: openssl version
	I0328 00:43:14.070512 2154103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19571412.pem && ln -fs /usr/share/ca-certificates/19571412.pem /etc/ssl/certs/19571412.pem"
	I0328 00:43:14.080448 2154103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19571412.pem
	I0328 00:43:14.084801 2154103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 28 00:02 /usr/share/ca-certificates/19571412.pem
	I0328 00:43:14.084910 2154103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19571412.pem
	I0328 00:43:14.092543 2154103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19571412.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:43:14.103066 2154103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:43:14.112459 2154103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:43:14.116410 2154103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:43:14.116528 2154103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:43:14.124165 2154103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:43:14.135465 2154103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1957141.pem && ln -fs /usr/share/ca-certificates/1957141.pem /etc/ssl/certs/1957141.pem"
	I0328 00:43:14.149254 2154103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1957141.pem
	I0328 00:43:14.153311 2154103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 28 00:02 /usr/share/ca-certificates/1957141.pem
	I0328 00:43:14.153387 2154103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1957141.pem
	I0328 00:43:14.161425 2154103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1957141.pem /etc/ssl/certs/51391683.0"
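Each openssl/ln pair above computes the certificate's OpenSSL subject hash and links /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based clients can find it by hash. A sketch of the same dance in Go, shelling out to openssl for the hash (linkCert is a hypothetical helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert hashes the cert with `openssl x509 -hash` and creates the
    // <hash>.0 symlink if it is not already present, like the shell above.
    func linkCert(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println("link failed:", err)
        }
    }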
	I0328 00:43:14.176132 2154103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:43:14.180208 2154103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:43:14.187958 2154103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:43:14.195383 2154103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:43:14.202713 2154103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:43:14.210145 2154103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:43:14.217462 2154103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 00:43:14.224755 2154103 kubeadm.go:391] StartCluster: {Name:old-k8s-version-847679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:43:14.224873 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0328 00:43:14.224939 2154103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:43:14.281656 2154103 cri.go:89] found id: "d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511"
	I0328 00:43:14.281691 2154103 cri.go:89] found id: "06e06b5cdd6b08bf5c5206518802f976d7a955622f1fa208f9a9e4d98c933d37"
	I0328 00:43:14.281711 2154103 cri.go:89] found id: "1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4"
	I0328 00:43:14.281718 2154103 cri.go:89] found id: "2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874"
	I0328 00:43:14.281722 2154103 cri.go:89] found id: "85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757"
	I0328 00:43:14.281732 2154103 cri.go:89] found id: "44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd"
	I0328 00:43:14.281736 2154103 cri.go:89] found id: "50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c"
	I0328 00:43:14.281739 2154103 cri.go:89] found id: "c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f"
	I0328 00:43:14.281742 2154103 cri.go:89] found id: ""
	I0328 00:43:14.281806 2154103 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0328 00:43:14.295084 2154103 cri.go:116] JSON = null
	W0328 00:43:14.295141 2154103 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0328 00:43:14.295234 2154103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 00:43:14.305536 2154103 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 00:43:14.305567 2154103 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 00:43:14.305574 2154103 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 00:43:14.305622 2154103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 00:43:14.314712 2154103 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:43:14.315212 2154103 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-847679" does not appear in /home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0328 00:43:14.315354 2154103 kubeconfig.go:62] /home/jenkins/minikube-integration/18158-1951721/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-847679" cluster setting kubeconfig missing "old-k8s-version-847679" context setting]
	I0328 00:43:14.315745 2154103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/kubeconfig: {Name:mk4e0e309c01b086d75fed1e6a33183905fae8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:43:14.318304 2154103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 00:43:14.327937 2154103 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0328 00:43:14.327979 2154103 kubeadm.go:591] duration metric: took 22.39942ms to restartPrimaryControlPlane
	I0328 00:43:14.328001 2154103 kubeadm.go:393] duration metric: took 103.256167ms to StartCluster
	I0328 00:43:14.328028 2154103 settings.go:142] acquiring lock: {Name:mk8bd0eb5f984b7df18eb5fe3af15aec887e343a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:43:14.328097 2154103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0328 00:43:14.328860 2154103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/kubeconfig: {Name:mk4e0e309c01b086d75fed1e6a33183905fae8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:43:14.329091 2154103 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0328 00:43:14.334732 2154103 out.go:177] * Verifying Kubernetes components...
	I0328 00:43:14.329470 2154103 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 00:43:14.329549 2154103 config.go:182] Loaded profile config "old-k8s-version-847679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0328 00:43:14.336665 2154103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:43:14.336694 2154103 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-847679"
	I0328 00:43:14.337004 2154103 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-847679"
	W0328 00:43:14.337019 2154103 addons.go:243] addon storage-provisioner should already be in state true
	I0328 00:43:14.337046 2154103 host.go:66] Checking if "old-k8s-version-847679" exists ...
	I0328 00:43:14.337476 2154103 cli_runner.go:164] Run: docker container inspect old-k8s-version-847679 --format={{.State.Status}}
	I0328 00:43:14.336704 2154103 addons.go:69] Setting dashboard=true in profile "old-k8s-version-847679"
	I0328 00:43:14.337714 2154103 addons.go:234] Setting addon dashboard=true in "old-k8s-version-847679"
	W0328 00:43:14.337728 2154103 addons.go:243] addon dashboard should already be in state true
	I0328 00:43:14.337750 2154103 host.go:66] Checking if "old-k8s-version-847679" exists ...
	I0328 00:43:14.338253 2154103 cli_runner.go:164] Run: docker container inspect old-k8s-version-847679 --format={{.State.Status}}
	I0328 00:43:14.336710 2154103 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-847679"
	I0328 00:43:14.341770 2154103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-847679"
	I0328 00:43:14.336716 2154103 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-847679"
	I0328 00:43:14.341896 2154103 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-847679"
	W0328 00:43:14.341940 2154103 addons.go:243] addon metrics-server should already be in state true
	I0328 00:43:14.341970 2154103 host.go:66] Checking if "old-k8s-version-847679" exists ...
	I0328 00:43:14.342287 2154103 cli_runner.go:164] Run: docker container inspect old-k8s-version-847679 --format={{.State.Status}}
	I0328 00:43:14.342390 2154103 cli_runner.go:164] Run: docker container inspect old-k8s-version-847679 --format={{.State.Status}}
	I0328 00:43:14.408648 2154103 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0328 00:43:14.410412 2154103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:43:14.412175 2154103 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:43:14.412194 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 00:43:14.412268 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:14.418158 2154103 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0328 00:43:14.420210 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0328 00:43:14.420233 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0328 00:43:14.420307 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:14.429138 2154103 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 00:43:14.433146 2154103 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 00:43:14.433175 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 00:43:14.433253 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:14.435278 2154103 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-847679"
	W0328 00:43:14.435296 2154103 addons.go:243] addon default-storageclass should already be in state true
	I0328 00:43:14.435320 2154103 host.go:66] Checking if "old-k8s-version-847679" exists ...
	I0328 00:43:14.435720 2154103 cli_runner.go:164] Run: docker container inspect old-k8s-version-847679 --format={{.State.Status}}
	I0328 00:43:14.490610 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:14.504001 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:14.507760 2154103 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 00:43:14.507780 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 00:43:14.507843 2154103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-847679
	I0328 00:43:14.530119 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:14.543698 2154103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35334 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/old-k8s-version-847679/id_rsa Username:docker}
	I0328 00:43:14.610808 2154103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:43:14.685786 2154103 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-847679" to be "Ready" ...
	I0328 00:43:14.709768 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0328 00:43:14.709841 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0328 00:43:14.759802 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:43:14.774409 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0328 00:43:14.774434 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0328 00:43:14.786765 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 00:43:14.789542 2154103 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 00:43:14.789565 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 00:43:14.851362 2154103 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 00:43:14.851389 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 00:43:14.895397 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0328 00:43:14.895427 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0328 00:43:15.001052 2154103 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 00:43:15.001082 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 00:43:15.032580 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0328 00:43:15.032608 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0328 00:43:15.088776 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:15.088885 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.088903 2154103 retry.go:31] will retry after 300.101164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.088821 2154103 retry.go:31] will retry after 363.909831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
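	[editor's note] The stanzas above show the shape of minikube's addon applier under a restarting control plane: every kubectl apply against localhost:8443 is refused while the apiserver comes back up, and retry.go schedules another attempt after a short, jittered, growing delay (300ms, 363ms, 504ms, ... stretching to several seconds below). A minimal stdlib-only Go sketch of that retry shape follows; the helper name, intervals, and jitter rule are illustrative assumptions, not minikube's actual retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff is a hypothetical helper mirroring the behavior visible
	// in the log: run the callback, and on failure wait a jittered, growing
	// interval before trying again, up to maxAttempts.
	func retryWithBackoff(callback func() error, initial time.Duration, maxAttempts int) error {
		delay := initial
		var lastErr error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if lastErr = callback(); lastErr == nil {
				return nil
			}
			// Jitter the delay (as in the 300ms/363ms/504ms spread above), then grow it.
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, lastErr)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return lastErr
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 3 {
				return errors.New("connection to the server localhost:8443 was refused")
			}
			return nil
		}, 300*time.Millisecond, 5)
		fmt.Println("result:", err)
	}

	The same loop explains why identical apply commands recur through 00:43:24 with ever-longer waits before finally succeeding once the apiserver answers.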
	I0328 00:43:15.095371 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 00:43:15.115910 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0328 00:43:15.115950 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0328 00:43:15.202962 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0328 00:43:15.202988 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0328 00:43:15.225778 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.225807 2154103 retry.go:31] will retry after 231.963596ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.245033 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0328 00:43:15.245063 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0328 00:43:15.270622 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0328 00:43:15.270650 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0328 00:43:15.290875 2154103 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 00:43:15.290902 2154103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0328 00:43:15.310757 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 00:43:15.389993 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 00:43:15.400198 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.400232 2154103 retry.go:31] will retry after 347.716597ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.453550 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:43:15.458476 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 00:43:15.528487 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.528525 2154103 retry.go:31] will retry after 359.334235ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:15.691632 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.691666 2154103 retry.go:31] will retry after 194.220884ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:15.699500 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.699548 2154103 retry.go:31] will retry after 336.681973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.748831 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 00:43:15.848510 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.848573 2154103 retry.go:31] will retry after 504.100067ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:15.886744 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:43:15.888015 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0328 00:43:16.036992 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 00:43:16.081805 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.081842 2154103 retry.go:31] will retry after 803.110746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:16.081885 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.081898 2154103 retry.go:31] will retry after 382.319344ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:16.161897 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.161953 2154103 retry.go:31] will retry after 666.058839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.353190 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 00:43:16.452934 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.452971 2154103 retry.go:31] will retry after 529.050799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.465188 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 00:43:16.560660 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.560695 2154103 retry.go:31] will retry after 1.240606514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.687410 2154103 node_ready.go:53] error getting node "old-k8s-version-847679": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-847679": dial tcp 192.168.85.2:8443: connect: connection refused
	I0328 00:43:16.828611 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 00:43:16.886008 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 00:43:16.955121 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.955149 2154103 retry.go:31] will retry after 1.005412432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:16.982353 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 00:43:17.043213 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:17.043243 2154103 retry.go:31] will retry after 1.031423792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:17.120744 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:17.120781 2154103 retry.go:31] will retry after 989.901002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:17.801717 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 00:43:17.900986 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:17.901016 2154103 retry.go:31] will retry after 1.763928504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:17.961408 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 00:43:18.065762 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:18.065795 2154103 retry.go:31] will retry after 1.042148133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:18.075139 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:43:18.111690 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 00:43:18.202246 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:18.202275 2154103 retry.go:31] will retry after 853.190096ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:18.245451 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:18.245479 2154103 retry.go:31] will retry after 1.40302728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:19.055655 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:43:19.109069 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 00:43:19.177564 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:19.177593 2154103 retry.go:31] will retry after 1.566947156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:19.187100 2154103 node_ready.go:53] error getting node "old-k8s-version-847679": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-847679": dial tcp 192.168.85.2:8443: connect: connection refused
	W0328 00:43:19.256651 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:19.256679 2154103 retry.go:31] will retry after 1.780176461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:19.649394 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 00:43:19.665366 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 00:43:19.765772 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:19.765804 2154103 retry.go:31] will retry after 1.353563272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:19.767197 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:19.767223 2154103 retry.go:31] will retry after 1.464781362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:20.745361 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 00:43:20.877418 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:20.877464 2154103 retry.go:31] will retry after 3.50680446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:21.037864 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 00:43:21.120399 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 00:43:21.194386 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:21.194421 2154103 retry.go:31] will retry after 4.097485753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:21.233005 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 00:43:21.329132 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:21.329162 2154103 retry.go:31] will retry after 3.64958077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 00:43:21.432016 2154103 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:21.432048 2154103 retry.go:31] will retry after 4.156842912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 00:43:21.686822 2154103 node_ready.go:53] error getting node "old-k8s-version-847679": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-847679": dial tcp 192.168.85.2:8443: connect: connection refused
	I0328 00:43:23.687113 2154103 node_ready.go:53] error getting node "old-k8s-version-847679": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-847679": dial tcp 192.168.85.2:8443: connect: connection refused
	I0328 00:43:24.384465 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:43:24.979426 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 00:43:25.292156 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 00:43:25.589243 2154103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0328 00:43:34.188472 2154103 node_ready.go:53] error getting node "old-k8s-version-847679": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-847679": net/http: TLS handshake timeout
	I0328 00:43:34.770784 2154103 node_ready.go:49] node "old-k8s-version-847679" has status "Ready":"True"
	I0328 00:43:34.770806 2154103 node_ready.go:38] duration metric: took 20.084938931s for node "old-k8s-version-847679" to be "Ready" ...
	I0328 00:43:34.770816 2154103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:43:35.122678 2154103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-fx44w" in "kube-system" namespace to be "Ready" ...
	I0328 00:43:35.160165 2154103 pod_ready.go:92] pod "coredns-74ff55c5b-fx44w" in "kube-system" namespace has status "Ready":"True"
	I0328 00:43:35.160244 2154103 pod_ready.go:81] duration metric: took 37.486526ms for pod "coredns-74ff55c5b-fx44w" in "kube-system" namespace to be "Ready" ...
	I0328 00:43:35.160272 2154103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-847679" in "kube-system" namespace to be "Ready" ...
	I0328 00:43:35.414858 2154103 pod_ready.go:92] pod "etcd-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"True"
	I0328 00:43:35.414932 2154103 pod_ready.go:81] duration metric: took 254.637287ms for pod "etcd-old-k8s-version-847679" in "kube-system" namespace to be "Ready" ...
	I0328 00:43:35.414962 2154103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-847679" in "kube-system" namespace to be "Ready" ...
	I0328 00:43:36.493513 2154103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.108968864s)
	I0328 00:43:36.661108 2154103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.368873781s)
	I0328 00:43:36.661278 2154103 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-847679"
	I0328 00:43:36.661220 2154103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.071909244s)
	I0328 00:43:36.661588 2154103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.682080469s)
	I0328 00:43:36.664127 2154103 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-847679 addons enable metrics-server
	
	I0328 00:43:36.668196 2154103 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0328 00:43:36.670279 2154103 addons.go:505] duration metric: took 22.340813823s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
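	[editor's note] With the apiserver reachable, the queued applies complete (each ssh_runner.go:235 line above reports a cumulative duration that includes the whole retry window), and the log shifts to readiness polling: node_ready.go and pod_ready.go repeatedly fetch the object and re-check its Ready condition until a per-object budget (the "waiting up to 6m0s" lines) expires. A small sketch of that poll loop; waitForReady and its parameters are assumed names for illustration, not minikube's signatures:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForReady is a hypothetical stand-in for the node_ready.go/pod_ready.go
	// loops in the log: poll the condition every interval until it reports true
	// or the deadline expires.
	func waitForReady(condition func() (bool, error), interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			ready, err := condition()
			if err != nil {
				// Transient errors (connection refused, TLS handshake timeout)
				// are logged and retried rather than treated as fatal, as in
				// the node_ready.go:53 lines above.
				fmt.Println("error getting object:", err)
			} else if ready {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for Ready")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		_ = waitForReady(func() (bool, error) {
			return time.Since(start) > 2*time.Second, nil
		}, 500*time.Millisecond, 6*time.Minute)
		fmt.Println("ready after", time.Since(start).Round(time.Millisecond))
	}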
	I0328 00:43:37.427774 2154103 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:39.940864 2154103 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:42.422299 2154103 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:44.921728 2154103 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"True"
	I0328 00:43:44.921756 2154103 pod_ready.go:81] duration metric: took 9.506770453s for pod "kube-apiserver-old-k8s-version-847679" in "kube-system" namespace to be "Ready" ...
	I0328 00:43:44.921769 2154103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace to be "Ready" ...
	I0328 00:43:46.927917 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:48.929033 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:50.929447 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:53.429619 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:55.928652 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:57.929590 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:43:59.935713 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:02.428277 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:04.428964 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:06.430258 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:08.928464 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:10.928640 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:13.427720 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:15.927380 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:17.929067 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:20.428537 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:22.927568 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:24.930753 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:27.428993 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:29.429990 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:31.930079 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:33.930336 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:36.428548 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:38.927879 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:40.929586 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:43.427531 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:45.428338 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:47.428386 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:49.429159 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:51.929601 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:54.428960 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:56.927286 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:44:58.930712 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:01.428000 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:03.928448 2154103 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:04.427793 2154103 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"True"
	I0328 00:45:04.427825 2154103 pod_ready.go:81] duration metric: took 1m19.506043598s for pod "kube-controller-manager-old-k8s-version-847679" in "kube-system" namespace to be "Ready" ...
	I0328 00:45:04.427837 2154103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nlm92" in "kube-system" namespace to be "Ready" ...
	I0328 00:45:04.433103 2154103 pod_ready.go:92] pod "kube-proxy-nlm92" in "kube-system" namespace has status "Ready":"True"
	I0328 00:45:04.433127 2154103 pod_ready.go:81] duration metric: took 5.281673ms for pod "kube-proxy-nlm92" in "kube-system" namespace to be "Ready" ...
	I0328 00:45:04.433138 2154103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-847679" in "kube-system" namespace to be "Ready" ...
	I0328 00:45:04.439022 2154103 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-847679" in "kube-system" namespace has status "Ready":"True"
	I0328 00:45:04.439049 2154103 pod_ready.go:81] duration metric: took 5.903469ms for pod "kube-scheduler-old-k8s-version-847679" in "kube-system" namespace to be "Ready" ...
	I0328 00:45:04.439061 2154103 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace to be "Ready" ...
	I0328 00:45:06.446018 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:08.946802 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:11.445494 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:13.944907 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:15.945972 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:18.451216 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:20.946140 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:23.444811 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:25.445659 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:27.446814 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:29.945577 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:32.444441 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:34.444831 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:36.446362 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:38.947679 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:41.445329 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:43.945351 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:45.946025 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:47.946100 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:50.445119 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:52.445407 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:54.445609 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:56.944643 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:45:58.944675 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:00.944927 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:03.444447 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:05.445755 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:07.945320 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:09.946107 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:12.445002 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:14.945253 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:16.946018 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:18.963152 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:21.445348 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:23.445681 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:25.445722 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:27.445959 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:29.945632 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:32.444687 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:34.445300 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:36.945401 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:38.945774 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:40.946078 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:43.445541 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:45.946146 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:48.445129 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:50.945417 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:52.946107 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:55.446075 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:57.945007 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:46:59.945530 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:01.946164 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:04.445405 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:06.478331 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:08.945228 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:10.945882 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:13.444804 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:15.446200 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:17.945829 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:19.947071 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:22.445946 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:24.945175 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:26.945879 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:29.445341 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:31.944600 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:33.945031 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:35.946263 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:38.445518 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:40.446729 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:42.945017 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:45.444469 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:47.445325 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:49.945720 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:52.444444 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:54.445504 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:56.445611 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:47:58.946546 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:01.445566 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:03.946202 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:06.445284 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:08.945995 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:11.444837 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:13.945700 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:16.445109 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:18.445397 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:20.945427 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:23.445224 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:25.445606 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:27.446362 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:29.945627 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:31.946013 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:33.948903 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:36.444921 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:38.445363 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:40.452386 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:42.947379 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:45.445029 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:47.491408 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:49.945180 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:51.946126 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:54.445014 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:56.445957 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:58.947147 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:49:01.448316 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:49:03.945495 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:49:04.444879 2154103 pod_ready.go:81] duration metric: took 4m0.005804606s for pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace to be "Ready" ...
	E0328 00:49:04.444914 2154103 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0328 00:49:04.444923 2154103 pod_ready.go:38] duration metric: took 5m29.674097361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
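The metrics-server pod exhausted its 4m0s allowance without ever reporting Ready, so the extra wait aborts with "context deadline exceeded" and the harness moves on to collecting diagnostics. The kubelet entries gathered further below point at an image pull failure as the blocker; a minimal manual check would be (sketch; pod name and namespace are copied from the log above):

	# sketch: inspect the stuck metrics-server pod, names taken from the log
	kubectl --context old-k8s-version-847679 -n kube-system describe pod metrics-server-9975d5f86-8p64b
	kubectl --context old-k8s-version-847679 -n kube-system get events \
	  --field-selector involvedObject.name=metrics-server-9975d5f86-8p64b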
	I0328 00:49:04.444938 2154103 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:49:04.444968 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0328 00:49:04.445023 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 00:49:04.560162 2154103 cri.go:89] found id: "7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210"
	I0328 00:49:04.560182 2154103 cri.go:89] found id: "c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f"
	I0328 00:49:04.560187 2154103 cri.go:89] found id: ""
	I0328 00:49:04.560194 2154103 logs.go:276] 2 containers: [7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210 c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f]
	I0328 00:49:04.560248 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.563906 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.567282 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0328 00:49:04.567347 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 00:49:04.638382 2154103 cri.go:89] found id: "a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff"
	I0328 00:49:04.638403 2154103 cri.go:89] found id: "85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757"
	I0328 00:49:04.638407 2154103 cri.go:89] found id: ""
	I0328 00:49:04.638415 2154103 logs.go:276] 2 containers: [a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff 85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757]
	I0328 00:49:04.638470 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.643037 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.646788 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0328 00:49:04.646859 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 00:49:04.722916 2154103 cri.go:89] found id: "4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e"
	I0328 00:49:04.722935 2154103 cri.go:89] found id: "d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511"
	I0328 00:49:04.722940 2154103 cri.go:89] found id: ""
	I0328 00:49:04.722947 2154103 logs.go:276] 2 containers: [4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511]
	I0328 00:49:04.723001 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.728811 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.738284 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0328 00:49:04.738363 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 00:49:04.823738 2154103 cri.go:89] found id: "d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc"
	I0328 00:49:04.823760 2154103 cri.go:89] found id: "44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd"
	I0328 00:49:04.823765 2154103 cri.go:89] found id: ""
	I0328 00:49:04.823774 2154103 logs.go:276] 2 containers: [d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc 44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd]
	I0328 00:49:04.823829 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.846516 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.850903 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0328 00:49:04.850976 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 00:49:04.920210 2154103 cri.go:89] found id: "9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00"
	I0328 00:49:04.920230 2154103 cri.go:89] found id: "2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874"
	I0328 00:49:04.920235 2154103 cri.go:89] found id: ""
	I0328 00:49:04.920248 2154103 logs.go:276] 2 containers: [9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00 2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874]
	I0328 00:49:04.920301 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.924378 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.930958 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 00:49:04.931031 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 00:49:05.002832 2154103 cri.go:89] found id: "929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378"
	I0328 00:49:05.002856 2154103 cri.go:89] found id: "50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c"
	I0328 00:49:05.002861 2154103 cri.go:89] found id: ""
	I0328 00:49:05.002870 2154103 logs.go:276] 2 containers: [929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378 50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c]
	I0328 00:49:05.002937 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.010203 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.018504 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0328 00:49:05.018586 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 00:49:05.071977 2154103 cri.go:89] found id: "5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0"
	I0328 00:49:05.072054 2154103 cri.go:89] found id: "1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4"
	I0328 00:49:05.072074 2154103 cri.go:89] found id: ""
	I0328 00:49:05.072097 2154103 logs.go:276] 2 containers: [5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0 1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4]
	I0328 00:49:05.072195 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.076028 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.079910 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 00:49:05.079990 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 00:49:05.136004 2154103 cri.go:89] found id: "13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4"
	I0328 00:49:05.136023 2154103 cri.go:89] found id: ""
	I0328 00:49:05.136031 2154103 logs.go:276] 1 containers: [13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4]
	I0328 00:49:05.136087 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.140367 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0328 00:49:05.140437 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 00:49:05.191809 2154103 cri.go:89] found id: "4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3"
	I0328 00:49:05.191884 2154103 cri.go:89] found id: "1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819"
	I0328 00:49:05.191903 2154103 cri.go:89] found id: ""
	I0328 00:49:05.191939 2154103 logs.go:276] 2 containers: [4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3 1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819]
	I0328 00:49:05.192037 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.196243 2154103 ssh_runner.go:195] Run: which crictl
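Having resolved one or two container IDs (current and previous) for every component, the gatherer below tails each one with crictl. The whole sweep can be approximated in a single shell loop, as a sketch (the component names and the --tail 400 value are taken from the surrounding log):

	# sketch: per-component log sweep mirroring the crictl calls logged below
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "== $name $id =="
	    sudo crictl logs --tail 400 "$id"
	  done
	done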
	I0328 00:49:05.200480 2154103 logs.go:123] Gathering logs for storage-provisioner [1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819] ...
	I0328 00:49:05.200554 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819"
	I0328 00:49:05.261478 2154103 logs.go:123] Gathering logs for kubelet ...
	I0328 00:49:05.261552 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 00:49:05.322155 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.765623     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fjkcr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fjkcr" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.322436 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.766055     664 reflector.go:138] object-"default"/"default-token-n8nc9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-n8nc9" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.322682 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.766390     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-vb29g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vb29g" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.322931 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.767972     664 reflector.go:138] object-"kube-system"/"metrics-server-token-rprp4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-rprp4" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.323158 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.768316     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.323397 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.768642     664 reflector.go:138] object-"kube-system"/"coredns-token-wzdsg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-wzdsg" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.323631 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.768958     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.323877 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.769230     664 reflector.go:138] object-"kube-system"/"kindnet-token-qsh9h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qsh9h" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.339089 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:36 old-k8s-version-847679 kubelet[664]: E0328 00:43:36.787002     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.340673 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:37 old-k8s-version-847679 kubelet[664]: E0328 00:43:37.595435     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.343484 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:50 old-k8s-version-847679 kubelet[664]: E0328 00:43:50.261613     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.348909 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:02 old-k8s-version-847679 kubelet[664]: E0328 00:44:02.701753     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.349236 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:03 old-k8s-version-847679 kubelet[664]: E0328 00:44:03.252906     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.349572 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:03 old-k8s-version-847679 kubelet[664]: E0328 00:44:03.705765     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.349900 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:04 old-k8s-version-847679 kubelet[664]: E0328 00:44:04.828163     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.350354 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:08 old-k8s-version-847679 kubelet[664]: E0328 00:44:08.718226     664 pod_workers.go:191] Error syncing pod 74bdf3d9-c2be-48e7-a731-914196f71259 ("storage-provisioner_kube-system(74bdf3d9-c2be-48e7-a731-914196f71259)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(74bdf3d9-c2be-48e7-a731-914196f71259)"
	W0328 00:49:05.353146 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:16 old-k8s-version-847679 kubelet[664]: E0328 00:44:16.262924     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.353737 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:17 old-k8s-version-847679 kubelet[664]: E0328 00:44:17.754822     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.355371 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:24 old-k8s-version-847679 kubelet[664]: E0328 00:44:24.827598     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.355607 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:27 old-k8s-version-847679 kubelet[664]: E0328 00:44:27.249700     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.355822 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:38 old-k8s-version-847679 kubelet[664]: E0328 00:44:38.249891     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.356442 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:39 old-k8s-version-847679 kubelet[664]: E0328 00:44:39.810292     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.357224 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:44 old-k8s-version-847679 kubelet[664]: E0328 00:44:44.827990     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.357723 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:52 old-k8s-version-847679 kubelet[664]: E0328 00:44:52.250258     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.358116 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:58 old-k8s-version-847679 kubelet[664]: E0328 00:44:58.249954     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.363628 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:07 old-k8s-version-847679 kubelet[664]: E0328 00:45:07.258816     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.364001 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:09 old-k8s-version-847679 kubelet[664]: E0328 00:45:09.249294     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.364214 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:18 old-k8s-version-847679 kubelet[664]: E0328 00:45:18.250308     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.364835 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:22 old-k8s-version-847679 kubelet[664]: E0328 00:45:22.906546     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.365186 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:24 old-k8s-version-847679 kubelet[664]: E0328 00:45:24.834799     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.365399 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:31 old-k8s-version-847679 kubelet[664]: E0328 00:45:31.249728     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.365750 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:37 old-k8s-version-847679 kubelet[664]: E0328 00:45:37.249845     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.365993 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:43 old-k8s-version-847679 kubelet[664]: E0328 00:45:43.249752     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.366352 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:51 old-k8s-version-847679 kubelet[664]: E0328 00:45:51.249370     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.366568 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:57 old-k8s-version-847679 kubelet[664]: E0328 00:45:57.249692     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.366922 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:02 old-k8s-version-847679 kubelet[664]: E0328 00:46:02.250374     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.367141 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:11 old-k8s-version-847679 kubelet[664]: E0328 00:46:11.249700     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.367504 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:15 old-k8s-version-847679 kubelet[664]: E0328 00:46:15.250649     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.367739 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:23 old-k8s-version-847679 kubelet[664]: E0328 00:46:23.250077     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.368103 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:28 old-k8s-version-847679 kubelet[664]: E0328 00:46:28.250101     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.370604 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:38 old-k8s-version-847679 kubelet[664]: E0328 00:46:38.258712     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.371223 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:44 old-k8s-version-847679 kubelet[664]: E0328 00:46:44.112017     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.372974 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:45 old-k8s-version-847679 kubelet[664]: E0328 00:46:45.141180     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.373207 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:53 old-k8s-version-847679 kubelet[664]: E0328 00:46:53.262507     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.373562 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:59 old-k8s-version-847679 kubelet[664]: E0328 00:46:59.249998     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.373776 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:07 old-k8s-version-847679 kubelet[664]: E0328 00:47:07.249652     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.374150 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:10 old-k8s-version-847679 kubelet[664]: E0328 00:47:10.250041     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.374364 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:19 old-k8s-version-847679 kubelet[664]: E0328 00:47:19.249950     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.374713 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:23 old-k8s-version-847679 kubelet[664]: E0328 00:47:23.249461     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.374925 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:32 old-k8s-version-847679 kubelet[664]: E0328 00:47:32.250131     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.375510 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:34 old-k8s-version-847679 kubelet[664]: E0328 00:47:34.250682     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.375745 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:44 old-k8s-version-847679 kubelet[664]: E0328 00:47:44.250936     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.376111 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:45 old-k8s-version-847679 kubelet[664]: E0328 00:47:45.250327     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.376337 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:55 old-k8s-version-847679 kubelet[664]: E0328 00:47:55.253899     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.376692 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:58 old-k8s-version-847679 kubelet[664]: E0328 00:47:58.249397     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.376904 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:07 old-k8s-version-847679 kubelet[664]: E0328 00:48:07.249817     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.377280 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:13 old-k8s-version-847679 kubelet[664]: E0328 00:48:13.249493     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.377492 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:21 old-k8s-version-847679 kubelet[664]: E0328 00:48:21.249539     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.377844 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:24 old-k8s-version-847679 kubelet[664]: E0328 00:48:24.250324     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.378070 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:35 old-k8s-version-847679 kubelet[664]: E0328 00:48:35.249831     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.378434 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:37 old-k8s-version-847679 kubelet[664]: E0328 00:48:37.249346     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.378645 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:49 old-k8s-version-847679 kubelet[664]: E0328 00:48:49.249750     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.378995 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:52 old-k8s-version-847679 kubelet[664]: E0328 00:48:52.251036     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.379230 2154103 logs.go:138] Found kubelet problem: Mar 28 00:49:02 old-k8s-version-847679 kubelet[664]: E0328 00:49:02.250400     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.379648 2154103 logs.go:138] Found kubelet problem: Mar 28 00:49:04 old-k8s-version-847679 kubelet[664]: E0328 00:49:04.249958     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	I0328 00:49:05.379687 2154103 logs.go:123] Gathering logs for dmesg ...
	I0328 00:49:05.379726 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:49:05.399479 2154103 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:49:05.399565 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 00:49:05.599596 2154103 logs.go:123] Gathering logs for coredns [4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e] ...
	I0328 00:49:05.599666 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e"
	I0328 00:49:05.647238 2154103 logs.go:123] Gathering logs for kube-scheduler [44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd] ...
	I0328 00:49:05.647268 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd"
	I0328 00:49:05.703837 2154103 logs.go:123] Gathering logs for kube-controller-manager [929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378] ...
	I0328 00:49:05.703873 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378"
	I0328 00:49:05.787168 2154103 logs.go:123] Gathering logs for kindnet [5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0] ...
	I0328 00:49:05.787205 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0"
	I0328 00:49:05.846668 2154103 logs.go:123] Gathering logs for containerd ...
	I0328 00:49:05.846698 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0328 00:49:05.912726 2154103 logs.go:123] Gathering logs for coredns [d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511] ...
	I0328 00:49:05.912762 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511"
	I0328 00:49:05.964692 2154103 logs.go:123] Gathering logs for kube-proxy [9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00] ...
	I0328 00:49:05.964721 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00"
	I0328 00:49:06.028329 2154103 logs.go:123] Gathering logs for kube-proxy [2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874] ...
	I0328 00:49:06.028370 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874"
	I0328 00:49:06.085432 2154103 logs.go:123] Gathering logs for storage-provisioner [4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3] ...
	I0328 00:49:06.085465 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3"
	I0328 00:49:06.136140 2154103 logs.go:123] Gathering logs for kube-apiserver [7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210] ...
	I0328 00:49:06.136169 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210"
	I0328 00:49:06.233524 2154103 logs.go:123] Gathering logs for kube-apiserver [c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f] ...
	I0328 00:49:06.233559 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f"
	I0328 00:49:06.324936 2154103 logs.go:123] Gathering logs for etcd [85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757] ...
	I0328 00:49:06.324970 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757"
	I0328 00:49:06.423311 2154103 logs.go:123] Gathering logs for kube-scheduler [d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc] ...
	I0328 00:49:06.423386 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc"
	I0328 00:49:06.472151 2154103 logs.go:123] Gathering logs for kube-controller-manager [50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c] ...
	I0328 00:49:06.472177 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c"
	I0328 00:49:06.571794 2154103 logs.go:123] Gathering logs for kindnet [1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4] ...
	I0328 00:49:06.571830 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4"
	I0328 00:49:06.631887 2154103 logs.go:123] Gathering logs for kubernetes-dashboard [13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4] ...
	I0328 00:49:06.631916 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4"
	I0328 00:49:06.685104 2154103 logs.go:123] Gathering logs for etcd [a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff] ...
	I0328 00:49:06.685133 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff"
	I0328 00:49:06.742899 2154103 logs.go:123] Gathering logs for container status ...
	I0328 00:49:06.742926 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:49:06.824135 2154103 out.go:304] Setting ErrFile to fd 2...
	I0328 00:49:06.824160 2154103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 00:49:06.824234 2154103 out.go:239] X Problems detected in kubelet:
	W0328 00:49:06.824249 2154103 out.go:239]   Mar 28 00:48:37 old-k8s-version-847679 kubelet[664]: E0328 00:48:37.249346     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:06.824259 2154103 out.go:239]   Mar 28 00:48:49 old-k8s-version-847679 kubelet[664]: E0328 00:48:49.249750     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:06.824403 2154103 out.go:239]   Mar 28 00:48:52 old-k8s-version-847679 kubelet[664]: E0328 00:48:52.251036     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:06.824412 2154103 out.go:239]   Mar 28 00:49:02 old-k8s-version-847679 kubelet[664]: E0328 00:49:02.250400     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:06.824421 2154103 out.go:239]   Mar 28 00:49:04 old-k8s-version-847679 kubelet[664]: E0328 00:49:04.249958     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	I0328 00:49:06.824438 2154103 out.go:304] Setting ErrFile to fd 2...
	I0328 00:49:06.824444 2154103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:49:16.825083 2154103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:49:16.839970 2154103 api_server.go:72] duration metric: took 6m2.51083602s to wait for apiserver process to appear ...
	I0328 00:49:16.840001 2154103 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:49:16.842533 2154103 out.go:177] 
	W0328 00:49:16.844576 2154103 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0328 00:49:16.844594 2154103 out.go:239] * 
	W0328 00:49:16.845547 2154103 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:49:16.847340 2154103 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-847679 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
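The decisive failure above is the GUEST_START timeout: the apiserver's /healthz endpoint never reported healthy within the 6m wait. The repeated metrics-server ImagePullBackOff warnings are expected noise in this suite, because the addon is deliberately pointed at the unreachable registry fake.domain (see the audit table in the post-mortem logs below). A minimal sketch, assuming the forwarded apiserver port from the docker inspect output below (35331 in this run; it changes on every start), of the kind of healthz poll minikube performs:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver behind the forwarded port serves a self-signed cert,
	// so verification is skipped for this diagnostic probe only.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the harness's 6m0s node wait
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://127.0.0.1:35331/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy; minikube would proceed past GUEST_START here
			}
		} else {
			fmt.Println("healthz:", err)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never reported healthy")
}

Six minutes of connection errors or non-200 responses from a loop like this corresponds to the "apiserver healthz never reported healthy" message, and hence the exit status 80, in the stderr block above.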
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-847679
helpers_test.go:235: (dbg) docker inspect old-k8s-version-847679:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3cb0e60f8e3b85ba836fcdeaa78c9848a8b3637f47db7b54d107c9a7c03efa01",
	        "Created": "2024-03-28T00:40:20.040482719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2154294,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-28T00:43:06.88438028Z",
	            "FinishedAt": "2024-03-28T00:43:05.288960964Z"
	        },
	        "Image": "sha256:f9b5358e8c18dbe49e632154cad75e0968b2e103f621caff2c3ed996f4155861",
	        "ResolvConfPath": "/var/lib/docker/containers/3cb0e60f8e3b85ba836fcdeaa78c9848a8b3637f47db7b54d107c9a7c03efa01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3cb0e60f8e3b85ba836fcdeaa78c9848a8b3637f47db7b54d107c9a7c03efa01/hostname",
	        "HostsPath": "/var/lib/docker/containers/3cb0e60f8e3b85ba836fcdeaa78c9848a8b3637f47db7b54d107c9a7c03efa01/hosts",
	        "LogPath": "/var/lib/docker/containers/3cb0e60f8e3b85ba836fcdeaa78c9848a8b3637f47db7b54d107c9a7c03efa01/3cb0e60f8e3b85ba836fcdeaa78c9848a8b3637f47db7b54d107c9a7c03efa01-json.log",
	        "Name": "/old-k8s-version-847679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-847679:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-847679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/81df83008b5b166091ba4b29963a424545ec8c609b1ed9002b1e16d9645c8ba6-init/diff:/var/lib/docker/overlay2/07f877cb7d661b8e8bf24e390c9cea61396c20d4f4c8c6395f4b5d699fc104ea/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81df83008b5b166091ba4b29963a424545ec8c609b1ed9002b1e16d9645c8ba6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81df83008b5b166091ba4b29963a424545ec8c609b1ed9002b1e16d9645c8ba6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81df83008b5b166091ba4b29963a424545ec8c609b1ed9002b1e16d9645c8ba6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-847679",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-847679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-847679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-847679",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-847679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e8bdad9f5841f40fcf6e92ffd4f1f0d7b57203c0862007045fa6dd2d585a79a",
	            "SandboxKey": "/var/run/docker/netns/6e8bdad9f584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35334"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35333"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35330"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35332"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35331"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-847679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "3a76509f801b4a3516e542189bc148bd8a0846431a7c1e3e34f88f3112fe0c94",
	                    "EndpointID": "240c9d85a78e79b7b3d63756ffa1fc1280635be00c59b8708ae56cc9c53dfc94",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-847679",
	                        "3cb0e60f8e3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
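The inspect output shows the node container itself is healthy: State.Status is "running" with no OOM kill or restart, and the apiserver's 8443/tcp is published on 127.0.0.1:35331, so the healthz timeout happened inside the guest rather than at the Docker layer. A hypothetical helper (not part of the harness) that extracts that port mapping with docker's Go-template format syntax:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Index NetworkSettings.Ports["8443/tcp"][0].HostPort, matching the
	// structure of the inspect JSON above.
	out, err := exec.Command("docker", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"old-k8s-version-847679").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 35331 in this run
}

The same template works for the other published ports (22, 2376, 5000, 32443).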
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-847679 -n old-k8s-version-847679
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-847679 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-847679 logs -n 25: (2.438213755s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-658183                              | cert-expiration-658183   | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:38 UTC | 28 Mar 24 00:39 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	| ssh     | force-systemd-env-772681                               | force-systemd-env-772681 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:39 UTC | 28 Mar 24 00:39 UTC |
	|         | ssh cat                                                |                          |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |                |                     |                     |
	| delete  | -p force-systemd-env-772681                            | force-systemd-env-772681 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:39 UTC | 28 Mar 24 00:39 UTC |
	| start   | -p cert-options-240788                                 | cert-options-240788      | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:39 UTC | 28 Mar 24 00:40 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |                |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |                |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |                |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	| ssh     | cert-options-240788 ssh                                | cert-options-240788      | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:40 UTC | 28 Mar 24 00:40 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |                |                     |                     |
	| ssh     | -p cert-options-240788 -- sudo                         | cert-options-240788      | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:40 UTC | 28 Mar 24 00:40 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |                |                     |                     |
	| delete  | -p cert-options-240788                                 | cert-options-240788      | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:40 UTC | 28 Mar 24 00:40 UTC |
	| start   | -p old-k8s-version-847679                              | old-k8s-version-847679   | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:40 UTC | 28 Mar 24 00:42 UTC |
	|         | --memory=2200                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --kvm-network=default                                  |                          |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |                |                     |                     |
	|         | --keep-context=false                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |                |                     |                     |
	| start   | -p cert-expiration-658183                              | cert-expiration-658183   | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:42 UTC | 28 Mar 24 00:42 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	| delete  | -p cert-expiration-658183                              | cert-expiration-658183   | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:42 UTC | 28 Mar 24 00:42 UTC |
	| start   | -p no-preload-137753 --memory=2200                     | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:42 UTC | 28 Mar 24 00:43 UTC |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --preload=false --driver=docker                        |                          |         |                |                     |                     |
	|         |  --container-runtime=containerd                        |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                          |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847679        | old-k8s-version-847679   | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:42 UTC | 28 Mar 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |                |                     |                     |
	| stop    | -p old-k8s-version-847679                              | old-k8s-version-847679   | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:42 UTC | 28 Mar 24 00:43 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847679             | old-k8s-version-847679   | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:43 UTC | 28 Mar 24 00:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |                |                     |                     |
	| start   | -p old-k8s-version-847679                              | old-k8s-version-847679   | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:43 UTC |                     |
	|         | --memory=2200                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --kvm-network=default                                  |                          |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |                |                     |                     |
	|         | --keep-context=false                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-137753             | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:44 UTC | 28 Mar 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |                |                     |                     |
	| stop    | -p no-preload-137753                                   | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:44 UTC | 28 Mar 24 00:44 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-137753                  | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:44 UTC | 28 Mar 24 00:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |                |                     |                     |
	| start   | -p no-preload-137753 --memory=2200                     | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:44 UTC | 28 Mar 24 00:48 UTC |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --preload=false --driver=docker                        |                          |         |                |                     |                     |
	|         |  --container-runtime=containerd                        |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                          |         |                |                     |                     |
	| image   | no-preload-137753 image list                           | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:48 UTC | 28 Mar 24 00:48 UTC |
	|         | --format=json                                          |                          |         |                |                     |                     |
	| pause   | -p no-preload-137753                                   | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:48 UTC | 28 Mar 24 00:48 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |                |                     |                     |
	| unpause | -p no-preload-137753                                   | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:48 UTC | 28 Mar 24 00:48 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |                |                     |                     |
	| delete  | -p no-preload-137753                                   | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:49 UTC | 28 Mar 24 00:49 UTC |
	| delete  | -p no-preload-137753                                   | no-preload-137753        | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:49 UTC | 28 Mar 24 00:49 UTC |
	| start   | -p embed-certs-705455                                  | embed-certs-705455       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:49 UTC |                     |
	|         | --memory=2200                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                          |         |                |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
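The audit trail above also records why metrics-server never becomes Ready: it was enabled with --registries=MetricsServer=fake.domain, so every pull of fake.domain/registry.k8s.io/echoserver:1.4 is doomed and the kubelet's ImagePullBackOff entries are by design. A hedged sketch (the k8s-app=metrics-server label and the profile-named kubectl context are assumptions, not taken from the harness) for confirming that waiting state:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// JSONPath extracts the waiting reason of the first metrics-server
	// container; ImagePullBackOff here matches the kubelet log entries.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-847679",
		"get", "pod", "-n", "kube-system", "-l", "k8s-app=metrics-server",
		"-o", "jsonpath={.items[0].status.containerStatuses[0].state.waiting.reason}").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("waiting reason:", string(out))
}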
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:49:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:49:03.815775 2164752 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:49:03.815925 2164752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:49:03.815950 2164752 out.go:304] Setting ErrFile to fd 2...
	I0328 00:49:03.815969 2164752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:49:03.816224 2164752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:49:03.816686 2164752 out.go:298] Setting JSON to false
	I0328 00:49:03.817856 2164752 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30682,"bootTime":1711556262,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 00:49:03.818045 2164752 start.go:139] virtualization:  
	I0328 00:49:03.821184 2164752 out.go:177] * [embed-certs-705455] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 00:49:03.823516 2164752 out.go:177]   - MINIKUBE_LOCATION=18158
	I0328 00:49:03.825516 2164752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:49:03.823604 2164752 notify.go:220] Checking for updates...
	I0328 00:49:03.827837 2164752 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0328 00:49:03.829771 2164752 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0328 00:49:03.831813 2164752 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 00:49:03.833499 2164752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:49:03.835589 2164752 config.go:182] Loaded profile config "old-k8s-version-847679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0328 00:49:03.835674 2164752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:49:03.855727 2164752 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 00:49:03.855853 2164752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:49:03.926915 2164752 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 00:49:03.91693636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:49:03.927024 2164752 docker.go:295] overlay module found
	I0328 00:49:03.929537 2164752 out.go:177] * Using the docker driver based on user configuration
	I0328 00:49:03.931557 2164752 start.go:297] selected driver: docker
	I0328 00:49:03.931578 2164752 start.go:901] validating driver "docker" against <nil>
	I0328 00:49:03.931592 2164752 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:49:03.932198 2164752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:49:03.994709 2164752 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 00:49:03.983463271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:49:03.994885 2164752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 00:49:03.995111 2164752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:49:03.997520 2164752 out.go:177] * Using Docker driver with root privileges
	I0328 00:49:04.001056 2164752 cni.go:84] Creating CNI manager for ""
	I0328 00:49:04.001093 2164752 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 00:49:04.001105 2164752 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 00:49:04.001281 2164752 start.go:340] cluster config:
	{Name:embed-certs-705455 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-705455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:49:04.005655 2164752 out.go:177] * Starting "embed-certs-705455" primary control-plane node in "embed-certs-705455" cluster
	I0328 00:49:04.007622 2164752 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 00:49:04.010113 2164752 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0328 00:49:04.012052 2164752 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 00:49:04.012120 2164752 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0328 00:49:04.012134 2164752 cache.go:56] Caching tarball of preloaded images
	I0328 00:49:04.012254 2164752 preload.go:173] Found /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 00:49:04.012269 2164752 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0328 00:49:04.012381 2164752 profile.go:142] Saving config to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/embed-certs-705455/config.json ...
	I0328 00:49:04.012405 2164752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/embed-certs-705455/config.json: {Name:mka4401d9606bb028396fe7df3f79700f25b016a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:49:04.012578 2164752 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0328 00:49:04.026760 2164752 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon, skipping pull
	I0328 00:49:04.026790 2164752 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in daemon, skipping load
	I0328 00:49:04.026813 2164752 cache.go:194] Successfully downloaded all kic artifacts
	I0328 00:49:04.026841 2164752 start.go:360] acquireMachinesLock for embed-certs-705455: {Name:mk985ffaa99d4dc79074557f5e28b64bb5f327c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:49:04.027453 2164752 start.go:364] duration metric: took 582.675µs to acquireMachinesLock for "embed-certs-705455"
	I0328 00:49:04.027499 2164752 start.go:93] Provisioning new machine with config: &{Name:embed-certs-705455 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-705455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0328 00:49:04.027598 2164752 start.go:125] createHost starting for "" (driver="docker")
	I0328 00:49:01.448316 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:49:03.945495 2154103 pod_ready.go:102] pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace has status "Ready":"False"
	I0328 00:49:04.444879 2154103 pod_ready.go:81] duration metric: took 4m0.005804606s for pod "metrics-server-9975d5f86-8p64b" in "kube-system" namespace to be "Ready" ...
	E0328 00:49:04.444914 2154103 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0328 00:49:04.444923 2154103 pod_ready.go:38] duration metric: took 5m29.674097361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:49:04.444938 2154103 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:49:04.444968 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0328 00:49:04.445023 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 00:49:04.560162 2154103 cri.go:89] found id: "7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210"
	I0328 00:49:04.560182 2154103 cri.go:89] found id: "c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f"
	I0328 00:49:04.560187 2154103 cri.go:89] found id: ""
	I0328 00:49:04.560194 2154103 logs.go:276] 2 containers: [7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210 c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f]
	I0328 00:49:04.560248 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.563906 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.567282 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0328 00:49:04.567347 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 00:49:04.638382 2154103 cri.go:89] found id: "a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff"
	I0328 00:49:04.638403 2154103 cri.go:89] found id: "85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757"
	I0328 00:49:04.638407 2154103 cri.go:89] found id: ""
	I0328 00:49:04.638415 2154103 logs.go:276] 2 containers: [a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff 85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757]
	I0328 00:49:04.638470 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.643037 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.646788 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0328 00:49:04.646859 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 00:49:04.722916 2154103 cri.go:89] found id: "4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e"
	I0328 00:49:04.722935 2154103 cri.go:89] found id: "d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511"
	I0328 00:49:04.722940 2154103 cri.go:89] found id: ""
	I0328 00:49:04.722947 2154103 logs.go:276] 2 containers: [4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511]
	I0328 00:49:04.723001 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.728811 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.738284 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0328 00:49:04.738363 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 00:49:04.823738 2154103 cri.go:89] found id: "d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc"
	I0328 00:49:04.823760 2154103 cri.go:89] found id: "44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd"
	I0328 00:49:04.823765 2154103 cri.go:89] found id: ""
	I0328 00:49:04.823774 2154103 logs.go:276] 2 containers: [d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc 44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd]
	I0328 00:49:04.823829 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.846516 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.850903 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0328 00:49:04.850976 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 00:49:04.920210 2154103 cri.go:89] found id: "9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00"
	I0328 00:49:04.920230 2154103 cri.go:89] found id: "2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874"
	I0328 00:49:04.920235 2154103 cri.go:89] found id: ""
	I0328 00:49:04.920248 2154103 logs.go:276] 2 containers: [9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00 2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874]
	I0328 00:49:04.920301 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.924378 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:04.930958 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 00:49:04.931031 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 00:49:05.002832 2154103 cri.go:89] found id: "929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378"
	I0328 00:49:05.002856 2154103 cri.go:89] found id: "50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c"
	I0328 00:49:05.002861 2154103 cri.go:89] found id: ""
	I0328 00:49:05.002870 2154103 logs.go:276] 2 containers: [929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378 50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c]
	I0328 00:49:05.002937 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.010203 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.018504 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0328 00:49:05.018586 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 00:49:05.071977 2154103 cri.go:89] found id: "5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0"
	I0328 00:49:05.072054 2154103 cri.go:89] found id: "1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4"
	I0328 00:49:05.072074 2154103 cri.go:89] found id: ""
	I0328 00:49:05.072097 2154103 logs.go:276] 2 containers: [5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0 1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4]
	I0328 00:49:05.072195 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.076028 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.079910 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 00:49:05.079990 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 00:49:05.136004 2154103 cri.go:89] found id: "13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4"
	I0328 00:49:05.136023 2154103 cri.go:89] found id: ""
	I0328 00:49:05.136031 2154103 logs.go:276] 1 containers: [13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4]
	I0328 00:49:05.136087 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.140367 2154103 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0328 00:49:05.140437 2154103 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 00:49:05.191809 2154103 cri.go:89] found id: "4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3"
	I0328 00:49:05.191884 2154103 cri.go:89] found id: "1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819"
	I0328 00:49:05.191903 2154103 cri.go:89] found id: ""
	I0328 00:49:05.191939 2154103 logs.go:276] 2 containers: [4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3 1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819]
	I0328 00:49:05.192037 2154103 ssh_runner.go:195] Run: which crictl
	I0328 00:49:05.196243 2154103 ssh_runner.go:195] Run: which crictl
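
Everything from the first cri.go:54 line down to this point is one discovery pass, repeated per component: list all matching containers with crictl ps -a --quiet --name=<component>, keep the returned IDs, and resolve the crictl binary with which between calls. The gathered IDs then feed the crictl logs --tail 400 calls that follow. A condensed sketch of that loop, run locally rather than through the ssh_runner the log shows (assumes crictl is installed and sudo is available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists every container (running or exited) whose name matches
// the component, the same `sudo crictl ps -a --quiet --name=...` invocation
// the log shows once per control-plane component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Printf("%s: listing failed: %v\n", component, err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each instance, as the gathering
			// steps below do with `crictl logs --tail 400 <id>`.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
		}
	}
}
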
	I0328 00:49:05.200480 2154103 logs.go:123] Gathering logs for storage-provisioner [1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819] ...
	I0328 00:49:05.200554 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819"
	I0328 00:49:05.261478 2154103 logs.go:123] Gathering logs for kubelet ...
	I0328 00:49:05.261552 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 00:49:05.322155 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.765623     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fjkcr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fjkcr" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.322436 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.766055     664 reflector.go:138] object-"default"/"default-token-n8nc9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-n8nc9" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.322682 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.766390     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-vb29g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vb29g" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.322931 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.767972     664 reflector.go:138] object-"kube-system"/"metrics-server-token-rprp4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-rprp4" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.323158 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.768316     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.323397 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.768642     664 reflector.go:138] object-"kube-system"/"coredns-token-wzdsg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-wzdsg" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.323631 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.768958     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.323877 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:34 old-k8s-version-847679 kubelet[664]: E0328 00:43:34.769230     664 reflector.go:138] object-"kube-system"/"kindnet-token-qsh9h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qsh9h" is forbidden: User "system:node:old-k8s-version-847679" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-847679' and this object
	W0328 00:49:05.339089 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:36 old-k8s-version-847679 kubelet[664]: E0328 00:43:36.787002     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.340673 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:37 old-k8s-version-847679 kubelet[664]: E0328 00:43:37.595435     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.343484 2154103 logs.go:138] Found kubelet problem: Mar 28 00:43:50 old-k8s-version-847679 kubelet[664]: E0328 00:43:50.261613     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.348909 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:02 old-k8s-version-847679 kubelet[664]: E0328 00:44:02.701753     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.349236 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:03 old-k8s-version-847679 kubelet[664]: E0328 00:44:03.252906     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.349572 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:03 old-k8s-version-847679 kubelet[664]: E0328 00:44:03.705765     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.349900 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:04 old-k8s-version-847679 kubelet[664]: E0328 00:44:04.828163     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.350354 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:08 old-k8s-version-847679 kubelet[664]: E0328 00:44:08.718226     664 pod_workers.go:191] Error syncing pod 74bdf3d9-c2be-48e7-a731-914196f71259 ("storage-provisioner_kube-system(74bdf3d9-c2be-48e7-a731-914196f71259)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(74bdf3d9-c2be-48e7-a731-914196f71259)"
	W0328 00:49:05.353146 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:16 old-k8s-version-847679 kubelet[664]: E0328 00:44:16.262924     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.353737 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:17 old-k8s-version-847679 kubelet[664]: E0328 00:44:17.754822     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.355371 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:24 old-k8s-version-847679 kubelet[664]: E0328 00:44:24.827598     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.355607 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:27 old-k8s-version-847679 kubelet[664]: E0328 00:44:27.249700     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.355822 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:38 old-k8s-version-847679 kubelet[664]: E0328 00:44:38.249891     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.356442 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:39 old-k8s-version-847679 kubelet[664]: E0328 00:44:39.810292     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.357224 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:44 old-k8s-version-847679 kubelet[664]: E0328 00:44:44.827990     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.357723 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:52 old-k8s-version-847679 kubelet[664]: E0328 00:44:52.250258     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.358116 2154103 logs.go:138] Found kubelet problem: Mar 28 00:44:58 old-k8s-version-847679 kubelet[664]: E0328 00:44:58.249954     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.363628 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:07 old-k8s-version-847679 kubelet[664]: E0328 00:45:07.258816     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.364001 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:09 old-k8s-version-847679 kubelet[664]: E0328 00:45:09.249294     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.364214 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:18 old-k8s-version-847679 kubelet[664]: E0328 00:45:18.250308     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.364835 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:22 old-k8s-version-847679 kubelet[664]: E0328 00:45:22.906546     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.365186 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:24 old-k8s-version-847679 kubelet[664]: E0328 00:45:24.834799     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.365399 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:31 old-k8s-version-847679 kubelet[664]: E0328 00:45:31.249728     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.365750 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:37 old-k8s-version-847679 kubelet[664]: E0328 00:45:37.249845     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.365993 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:43 old-k8s-version-847679 kubelet[664]: E0328 00:45:43.249752     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.366352 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:51 old-k8s-version-847679 kubelet[664]: E0328 00:45:51.249370     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.366568 2154103 logs.go:138] Found kubelet problem: Mar 28 00:45:57 old-k8s-version-847679 kubelet[664]: E0328 00:45:57.249692     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.366922 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:02 old-k8s-version-847679 kubelet[664]: E0328 00:46:02.250374     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.367141 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:11 old-k8s-version-847679 kubelet[664]: E0328 00:46:11.249700     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.367504 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:15 old-k8s-version-847679 kubelet[664]: E0328 00:46:15.250649     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.367739 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:23 old-k8s-version-847679 kubelet[664]: E0328 00:46:23.250077     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.368103 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:28 old-k8s-version-847679 kubelet[664]: E0328 00:46:28.250101     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.370604 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:38 old-k8s-version-847679 kubelet[664]: E0328 00:46:38.258712     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 00:49:05.371223 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:44 old-k8s-version-847679 kubelet[664]: E0328 00:46:44.112017     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.372974 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:45 old-k8s-version-847679 kubelet[664]: E0328 00:46:45.141180     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.373207 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:53 old-k8s-version-847679 kubelet[664]: E0328 00:46:53.262507     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.373562 2154103 logs.go:138] Found kubelet problem: Mar 28 00:46:59 old-k8s-version-847679 kubelet[664]: E0328 00:46:59.249998     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.373776 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:07 old-k8s-version-847679 kubelet[664]: E0328 00:47:07.249652     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.374150 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:10 old-k8s-version-847679 kubelet[664]: E0328 00:47:10.250041     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.374364 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:19 old-k8s-version-847679 kubelet[664]: E0328 00:47:19.249950     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.374713 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:23 old-k8s-version-847679 kubelet[664]: E0328 00:47:23.249461     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.374925 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:32 old-k8s-version-847679 kubelet[664]: E0328 00:47:32.250131     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.375510 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:34 old-k8s-version-847679 kubelet[664]: E0328 00:47:34.250682     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.375745 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:44 old-k8s-version-847679 kubelet[664]: E0328 00:47:44.250936     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.376111 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:45 old-k8s-version-847679 kubelet[664]: E0328 00:47:45.250327     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.376337 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:55 old-k8s-version-847679 kubelet[664]: E0328 00:47:55.253899     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.376692 2154103 logs.go:138] Found kubelet problem: Mar 28 00:47:58 old-k8s-version-847679 kubelet[664]: E0328 00:47:58.249397     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.376904 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:07 old-k8s-version-847679 kubelet[664]: E0328 00:48:07.249817     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.377280 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:13 old-k8s-version-847679 kubelet[664]: E0328 00:48:13.249493     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.377492 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:21 old-k8s-version-847679 kubelet[664]: E0328 00:48:21.249539     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.377844 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:24 old-k8s-version-847679 kubelet[664]: E0328 00:48:24.250324     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.378070 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:35 old-k8s-version-847679 kubelet[664]: E0328 00:48:35.249831     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.378434 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:37 old-k8s-version-847679 kubelet[664]: E0328 00:48:37.249346     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.378645 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:49 old-k8s-version-847679 kubelet[664]: E0328 00:48:49.249750     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.378995 2154103 logs.go:138] Found kubelet problem: Mar 28 00:48:52 old-k8s-version-847679 kubelet[664]: E0328 00:48:52.251036     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:05.379230 2154103 logs.go:138] Found kubelet problem: Mar 28 00:49:02 old-k8s-version-847679 kubelet[664]: E0328 00:49:02.250400     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:05.379648 2154103 logs.go:138] Found kubelet problem: Mar 28 00:49:04 old-k8s-version-847679 kubelet[664]: E0328 00:49:04.249958     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
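
The W-level block above is the output of minikube's kubelet problem scan: it tails the kubelet journal (journalctl -u kubelet -n 400, as at logs.go:123 earlier) and flags error-level lines such as the reflector and pod_workers failures. A rough sketch of such a scan, assuming the matching can be approximated by a single regular expression (the real matcher behind logs.go:138 is more selective):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// problemRe matches kubelet error/fatal lines like those flagged above,
// e.g. "kubelet[664]: E0328 ..."; treat this as an approximation.
var problemRe = regexp.MustCompile(`kubelet\[\d+\]: [EF]\d{4}`)

func main() {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can exceed the default 64K buffer
	for sc.Scan() {
		if problemRe.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}
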
	I0328 00:49:05.379687 2154103 logs.go:123] Gathering logs for dmesg ...
	I0328 00:49:05.379726 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:49:05.399479 2154103 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:49:05.399565 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 00:49:05.599596 2154103 logs.go:123] Gathering logs for coredns [4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e] ...
	I0328 00:49:05.599666 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e"
	I0328 00:49:05.647238 2154103 logs.go:123] Gathering logs for kube-scheduler [44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd] ...
	I0328 00:49:05.647268 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd"
	I0328 00:49:05.703837 2154103 logs.go:123] Gathering logs for kube-controller-manager [929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378] ...
	I0328 00:49:05.703873 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378"
	I0328 00:49:05.787168 2154103 logs.go:123] Gathering logs for kindnet [5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0] ...
	I0328 00:49:05.787205 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0"
	I0328 00:49:05.846668 2154103 logs.go:123] Gathering logs for containerd ...
	I0328 00:49:05.846698 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0328 00:49:05.912726 2154103 logs.go:123] Gathering logs for coredns [d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511] ...
	I0328 00:49:05.912762 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511"
	I0328 00:49:05.964692 2154103 logs.go:123] Gathering logs for kube-proxy [9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00] ...
	I0328 00:49:05.964721 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00"
	I0328 00:49:06.028329 2154103 logs.go:123] Gathering logs for kube-proxy [2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874] ...
	I0328 00:49:06.028370 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874"
	I0328 00:49:06.085432 2154103 logs.go:123] Gathering logs for storage-provisioner [4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3] ...
	I0328 00:49:06.085465 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3"
	I0328 00:49:06.136140 2154103 logs.go:123] Gathering logs for kube-apiserver [7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210] ...
	I0328 00:49:06.136169 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210"
	I0328 00:49:04.030030 2164752 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0328 00:49:04.030278 2164752 start.go:159] libmachine.API.Create for "embed-certs-705455" (driver="docker")
	I0328 00:49:04.030330 2164752 client.go:168] LocalClient.Create starting
	I0328 00:49:04.030404 2164752 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem
	I0328 00:49:04.030445 2164752 main.go:141] libmachine: Decoding PEM data...
	I0328 00:49:04.030463 2164752 main.go:141] libmachine: Parsing certificate...
	I0328 00:49:04.030545 2164752 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem
	I0328 00:49:04.030568 2164752 main.go:141] libmachine: Decoding PEM data...
	I0328 00:49:04.030578 2164752 main.go:141] libmachine: Parsing certificate...
	I0328 00:49:04.030942 2164752 cli_runner.go:164] Run: docker network inspect embed-certs-705455 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0328 00:49:04.043899 2164752 cli_runner.go:211] docker network inspect embed-certs-705455 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0328 00:49:04.044001 2164752 network_create.go:281] running [docker network inspect embed-certs-705455] to gather additional debugging logs...
	I0328 00:49:04.044027 2164752 cli_runner.go:164] Run: docker network inspect embed-certs-705455
	W0328 00:49:04.058263 2164752 cli_runner.go:211] docker network inspect embed-certs-705455 returned with exit code 1
	I0328 00:49:04.058299 2164752 network_create.go:284] error running [docker network inspect embed-certs-705455]: docker network inspect embed-certs-705455: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-705455 not found
	I0328 00:49:04.058313 2164752 network_create.go:286] output of [docker network inspect embed-certs-705455]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-705455 not found
	
	** /stderr **
	I0328 00:49:04.058420 2164752 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 00:49:04.072623 2164752 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f72df422d08 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e1:2e:26:ac} reservation:<nil>}
	I0328 00:49:04.073083 2164752 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-343e16c779e5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ab:20:b7:eb} reservation:<nil>}
	I0328 00:49:04.073493 2164752 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0caf71653905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:f0:74:41:50} reservation:<nil>}
	I0328 00:49:04.074145 2164752 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025b0580}
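
The three "skipping subnet ... taken" lines followed by "using free private subnet" show the probe order for a new docker network: start at 192.168.49.0/24 and step the third octet by 9 (49 → 58 → 67 → 76) until a /24 no host bridge already claims is found; the lines that follow then create that network with .1 as the gateway and assign the node .2 as its static IP. A sketch of that walk, with the taken set hard-coded where minikube actually inspects the host's interfaces:

package main

import "fmt"

// freeSubnet walks /24 candidates from 192.168.49.0 upward in steps of 9 on
// the third octet, matching the probe order in the log, and returns the
// first subnet not present in taken.
func freeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{ // in minikube these come from the host's bridge interfaces
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println("using free private subnet", freeSubnet(taken)) // 192.168.76.0/24
	// gateway becomes .1 and the container's static IP .2, as in the log
}
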
	I0328 00:49:04.074172 2164752 network_create.go:124] attempt to create docker network embed-certs-705455 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0328 00:49:04.074266 2164752 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-705455 embed-certs-705455
	I0328 00:49:04.140655 2164752 network_create.go:108] docker network embed-certs-705455 192.168.76.0/24 created
	I0328 00:49:04.140690 2164752 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-705455" container
	I0328 00:49:04.140760 2164752 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0328 00:49:04.153670 2164752 cli_runner.go:164] Run: docker volume create embed-certs-705455 --label name.minikube.sigs.k8s.io=embed-certs-705455 --label created_by.minikube.sigs.k8s.io=true
	I0328 00:49:04.168079 2164752 oci.go:103] Successfully created a docker volume embed-certs-705455
	I0328 00:49:04.168167 2164752 cli_runner.go:164] Run: docker run --rm --name embed-certs-705455-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-705455 --entrypoint /usr/bin/test -v embed-certs-705455:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib
	I0328 00:49:04.778503 2164752 oci.go:107] Successfully prepared a docker volume embed-certs-705455
	I0328 00:49:04.778540 2164752 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 00:49:04.778559 2164752 kic.go:194] Starting extracting preloaded images to volume ...
	I0328 00:49:04.778648 2164752 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-705455:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir
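
The two docker run calls above prime the machine's volume before the node container exists: a throwaway container whose entrypoint is /usr/bin/test mounts the named volume at /var just so docker creates it with the right ownership, then a second throwaway container unpacks the lz4-compressed preload tarball into the volume with tar. A sketch of the same two-step trick through the docker CLI (image, volume, and tarball path copied from the log; labels and the sha256 digest are trimmed):

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		log.Fatalf("docker %v: %v\n%s", args, err, out)
	}
}

func main() {
	const image = "gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0"
	const volume = "embed-certs-705455"
	const tarball = "/home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4"

	// Step 1: mount the named volume once so docker creates it with the
	// image's ownership; `/usr/bin/test -d /var/lib` exits immediately.
	run("run", "--rm", "--entrypoint", "/usr/bin/test",
		"-v", volume+":/var", image, "-d", "/var/lib")

	// Step 2: extract the lz4 preload into the volume from a second
	// short-lived container, so the node boots with its images present.
	run("run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir", image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
}
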
	I0328 00:49:06.233524 2154103 logs.go:123] Gathering logs for kube-apiserver [c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f] ...
	I0328 00:49:06.233559 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f"
	I0328 00:49:06.324936 2154103 logs.go:123] Gathering logs for etcd [85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757] ...
	I0328 00:49:06.324970 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757"
	I0328 00:49:06.423311 2154103 logs.go:123] Gathering logs for kube-scheduler [d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc] ...
	I0328 00:49:06.423386 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc"
	I0328 00:49:06.472151 2154103 logs.go:123] Gathering logs for kube-controller-manager [50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c] ...
	I0328 00:49:06.472177 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c"
	I0328 00:49:06.571794 2154103 logs.go:123] Gathering logs for kindnet [1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4] ...
	I0328 00:49:06.571830 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4"
	I0328 00:49:06.631887 2154103 logs.go:123] Gathering logs for kubernetes-dashboard [13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4] ...
	I0328 00:49:06.631916 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4"
	I0328 00:49:06.685104 2154103 logs.go:123] Gathering logs for etcd [a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff] ...
	I0328 00:49:06.685133 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff"
	I0328 00:49:06.742899 2154103 logs.go:123] Gathering logs for container status ...
	I0328 00:49:06.742926 2154103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:49:06.824135 2154103 out.go:304] Setting ErrFile to fd 2...
	I0328 00:49:06.824160 2154103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 00:49:06.824234 2154103 out.go:239] X Problems detected in kubelet:
	W0328 00:49:06.824249 2154103 out.go:239]   Mar 28 00:48:37 old-k8s-version-847679 kubelet[664]: E0328 00:48:37.249346     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:06.824259 2154103 out.go:239]   Mar 28 00:48:49 old-k8s-version-847679 kubelet[664]: E0328 00:48:49.249750     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:06.824403 2154103 out.go:239]   Mar 28 00:48:52 old-k8s-version-847679 kubelet[664]: E0328 00:48:52.251036     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	W0328 00:49:06.824412 2154103 out.go:239]   Mar 28 00:49:02 old-k8s-version-847679 kubelet[664]: E0328 00:49:02.250400     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 00:49:06.824421 2154103 out.go:239]   Mar 28 00:49:04 old-k8s-version-847679 kubelet[664]: E0328 00:49:04.249958     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	I0328 00:49:06.824438 2154103 out.go:304] Setting ErrFile to fd 2...
	I0328 00:49:06.824444 2154103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:49:10.166286 2164752 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-705455:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.387600549s)
	I0328 00:49:10.166327 2164752 kic.go:203] duration metric: took 5.387762231s to extract preloaded images to volume ...
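The extraction above is plain docker plumbing: run tar as the entrypoint of the kicbase image, with the preload tarball mounted read-only and the named volume mounted as the target. A hedged Go sketch of that invocation via os/exec (the image digest is elided here; this is an illustration, not minikube's cli_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4"
	// Mount the tarball read-only and the volume as the extraction target,
	// then untar with lz4 decompression, as in the log line above.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "embed-certs-705455:/extractDir",
		"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}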
	W0328 00:49:10.166495 2164752 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0328 00:49:10.166737 2164752 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0328 00:49:10.225770 2164752 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-705455 --name embed-certs-705455 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-705455 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-705455 --network embed-certs-705455 --ip 192.168.76.2 --volume embed-certs-705455:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8
	I0328 00:49:10.530129 2164752 cli_runner.go:164] Run: docker container inspect embed-certs-705455 --format={{.State.Running}}
	I0328 00:49:10.552231 2164752 cli_runner.go:164] Run: docker container inspect embed-certs-705455 --format={{.State.Status}}
	I0328 00:49:10.578653 2164752 cli_runner.go:164] Run: docker exec embed-certs-705455 stat /var/lib/dpkg/alternatives/iptables
	I0328 00:49:10.650776 2164752 oci.go:144] the created container "embed-certs-705455" has a running status.
	I0328 00:49:10.650803 2164752 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/embed-certs-705455/id_rsa...
	I0328 00:49:11.326883 2164752 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/embed-certs-705455/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
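Creating the kic ssh key amounts to generating an RSA keypair, keeping the private half under .minikube/machines, and copying the authorized_keys form into the container. A minimal sketch with crypto/rsa and golang.org/x/crypto/ssh (output paths shortened; not the actual minikube helper):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the keypair that ends up as id_rsa / id_rsa.pub.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// The authorized_keys line pushed to /home/docker/.ssh in the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}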
	I0328 00:49:11.349258 2164752 cli_runner.go:164] Run: docker container inspect embed-certs-705455 --format={{.State.Status}}
	I0328 00:49:11.374757 2164752 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0328 00:49:11.374779 2164752 kic_runner.go:114] Args: [docker exec --privileged embed-certs-705455 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0328 00:49:11.443773 2164752 cli_runner.go:164] Run: docker container inspect embed-certs-705455 --format={{.State.Status}}
	I0328 00:49:11.464317 2164752 machine.go:94] provisionDockerMachine start ...
	I0328 00:49:11.464407 2164752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-705455
	I0328 00:49:11.486233 2164752 main.go:141] libmachine: Using SSH client type: native
	I0328 00:49:11.486526 2164752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35344 <nil> <nil>}
	I0328 00:49:11.486540 2164752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:49:11.629445 2164752 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-705455
	
	I0328 00:49:11.629473 2164752 ubuntu.go:169] provisioning hostname "embed-certs-705455"
	I0328 00:49:11.629538 2164752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-705455
	I0328 00:49:11.647185 2164752 main.go:141] libmachine: Using SSH client type: native
	I0328 00:49:11.647439 2164752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35344 <nil> <nil>}
	I0328 00:49:11.647458 2164752 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-705455 && echo "embed-certs-705455" | sudo tee /etc/hostname
	I0328 00:49:11.792635 2164752 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-705455
	
	I0328 00:49:11.792732 2164752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-705455
	I0328 00:49:11.821152 2164752 main.go:141] libmachine: Using SSH client type: native
	I0328 00:49:11.821402 2164752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35344 <nil> <nil>}
	I0328 00:49:11.821426 2164752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-705455' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-705455/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-705455' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:49:11.950687 2164752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
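The shell above is an idempotent /etc/hosts update: do nothing if the hostname already resolves, rewrite the 127.0.1.1 line if one exists, otherwise append. The same logic expressed in Go, as a sketch (runs locally on /etc/hosts; the real step executes remotely under sudo):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee pipeline in the log above.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
			return nil // hostname already present
		}
	}
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname) // append a new entry
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-705455"); err != nil {
		fmt.Println(err)
	}
}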
	I0328 00:49:11.950757 2164752 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18158-1951721/.minikube CaCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18158-1951721/.minikube}
	I0328 00:49:11.950819 2164752 ubuntu.go:177] setting up certificates
	I0328 00:49:11.950844 2164752 provision.go:84] configureAuth start
	I0328 00:49:11.950918 2164752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-705455
	I0328 00:49:11.974999 2164752 provision.go:143] copyHostCerts
	I0328 00:49:11.975060 2164752 exec_runner.go:144] found /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.pem, removing ...
	I0328 00:49:11.975070 2164752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.pem
	I0328 00:49:11.975143 2164752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/ca.pem (1078 bytes)
	I0328 00:49:11.975235 2164752 exec_runner.go:144] found /home/jenkins/minikube-integration/18158-1951721/.minikube/cert.pem, removing ...
	I0328 00:49:11.975241 2164752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18158-1951721/.minikube/cert.pem
	I0328 00:49:11.975267 2164752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/cert.pem (1123 bytes)
	I0328 00:49:11.975326 2164752 exec_runner.go:144] found /home/jenkins/minikube-integration/18158-1951721/.minikube/key.pem, removing ...
	I0328 00:49:11.975330 2164752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18158-1951721/.minikube/key.pem
	I0328 00:49:11.975353 2164752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18158-1951721/.minikube/key.pem (1675 bytes)
	I0328 00:49:11.975408 2164752 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca-key.pem org=jenkins.embed-certs-705455 san=[127.0.0.1 192.168.76.2 embed-certs-705455 localhost minikube]
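The server cert generated above is an x509 certificate whose SANs cover every address the apiserver may be reached on: 127.0.0.1 via the published port, the static container IP, and the machine hostnames. A minimal crypto/x509 sketch (self-signed for brevity, where the real flow signs with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs match the san=[...] list in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-705455"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"embed-certs-705455", "localhost", "minikube"},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}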
	I0328 00:49:12.530708 2164752 provision.go:177] copyRemoteCerts
	I0328 00:49:12.530797 2164752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:49:12.530868 2164752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-705455
	I0328 00:49:12.546634 2164752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/embed-certs-705455/id_rsa Username:docker}
	I0328 00:49:12.638757 2164752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:49:12.666534 2164752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 00:49:12.691650 2164752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 00:49:12.715418 2164752 provision.go:87] duration metric: took 764.546934ms to configureAuth
	I0328 00:49:12.715446 2164752 ubuntu.go:193] setting minikube options for container-runtime
	I0328 00:49:12.715637 2164752 config.go:182] Loaded profile config "embed-certs-705455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:49:12.715650 2164752 machine.go:97] duration metric: took 1.251316112s to provisionDockerMachine
	I0328 00:49:12.715661 2164752 client.go:171] duration metric: took 8.685316781s to LocalClient.Create
	I0328 00:49:12.715687 2164752 start.go:167] duration metric: took 8.685397076s to libmachine.API.Create "embed-certs-705455"
	I0328 00:49:12.715699 2164752 start.go:293] postStartSetup for "embed-certs-705455" (driver="docker")
	I0328 00:49:12.715708 2164752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:49:12.715762 2164752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:49:12.715806 2164752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-705455
	I0328 00:49:12.731975 2164752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/embed-certs-705455/id_rsa Username:docker}
	I0328 00:49:12.823005 2164752 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:49:12.826330 2164752 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 00:49:12.826366 2164752 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 00:49:12.826377 2164752 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 00:49:12.826384 2164752 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 00:49:12.826394 2164752 filesync.go:126] Scanning /home/jenkins/minikube-integration/18158-1951721/.minikube/addons for local assets ...
	I0328 00:49:12.826452 2164752 filesync.go:126] Scanning /home/jenkins/minikube-integration/18158-1951721/.minikube/files for local assets ...
	I0328 00:49:12.826532 2164752 filesync.go:149] local asset: /home/jenkins/minikube-integration/18158-1951721/.minikube/files/etc/ssl/certs/19571412.pem -> 19571412.pem in /etc/ssl/certs
	I0328 00:49:12.826638 2164752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:49:12.834946 2164752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18158-1951721/.minikube/files/etc/ssl/certs/19571412.pem --> /etc/ssl/certs/19571412.pem (1708 bytes)
	I0328 00:49:12.859745 2164752 start.go:296] duration metric: took 144.032278ms for postStartSetup
	I0328 00:49:12.860125 2164752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-705455
	I0328 00:49:12.876645 2164752 profile.go:142] Saving config to /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/embed-certs-705455/config.json ...
	I0328 00:49:12.876930 2164752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:49:12.876989 2164752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-705455
	I0328 00:49:12.892962 2164752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/embed-certs-705455/id_rsa Username:docker}
	I0328 00:49:12.978654 2164752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 00:49:12.983201 2164752 start.go:128] duration metric: took 8.955585934s to createHost
	I0328 00:49:12.983224 2164752 start.go:83] releasing machines lock for "embed-certs-705455", held for 8.955749535s
	I0328 00:49:12.983308 2164752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-705455
	I0328 00:49:12.998686 2164752 ssh_runner.go:195] Run: cat /version.json
	I0328 00:49:12.998740 2164752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-705455
	I0328 00:49:12.998761 2164752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:49:12.998813 2164752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-705455
	I0328 00:49:13.018572 2164752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/embed-certs-705455/id_rsa Username:docker}
	I0328 00:49:13.018579 2164752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/embed-certs-705455/id_rsa Username:docker}
	I0328 00:49:13.105540 2164752 ssh_runner.go:195] Run: systemctl --version
	I0328 00:49:13.218946 2164752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 00:49:13.223675 2164752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0328 00:49:13.247762 2164752 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0328 00:49:13.247847 2164752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:49:13.277724 2164752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0328 00:49:13.277795 2164752 start.go:494] detecting cgroup driver to use...
	I0328 00:49:13.277852 2164752 detect.go:196] detected "cgroupfs" cgroup driver on host os
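Driver detection here decides which cgroup manager the container runtime should be configured for. A rough stand-in for that probe, assuming a hypothetical PID-1 heuristic (minikube's detect.go does its own probing, so treat this purely as an illustration):

package main

import (
	"fmt"
	"os"
	"strings"
)

// detectCgroupDriver: if PID 1 is systemd we could use the systemd driver;
// anything else (or an unreadable /proc) falls back to cgroupfs, which is
// the value the log reports on this host.
func detectCgroupDriver() string {
	comm, err := os.ReadFile("/proc/1/comm")
	if err == nil && strings.TrimSpace(string(comm)) == "systemd" {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
}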
	I0328 00:49:13.277982 2164752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 00:49:13.291380 2164752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:49:13.303517 2164752 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:49:13.303589 2164752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:49:13.318237 2164752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:49:13.333439 2164752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:49:13.435088 2164752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:49:13.533859 2164752 docker.go:233] disabling docker service ...
	I0328 00:49:13.533958 2164752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:49:13.555575 2164752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:49:13.567859 2164752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:49:13.658435 2164752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:49:13.751649 2164752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:49:13.763400 2164752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:49:13.780810 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 00:49:13.791404 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 00:49:13.801710 2164752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 00:49:13.801785 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 00:49:13.812128 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:49:13.823012 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 00:49:13.833374 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:49:13.843709 2164752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:49:13.853487 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 00:49:13.864274 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 00:49:13.874435 2164752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
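The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false to match the cgroupfs driver detected earlier, and normalize the runc runtime and CNI settings. Two of those rewrites expressed in Go with regexp, as a sketch (the real edits run remotely via sh -c "sudo sed ..."):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Pin the pause image, preserving the original indentation.
	out := regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.9"`))
	// Force the cgroupfs driver by disabling SystemdCgroup.
	out = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAll(out, []byte(`${1}SystemdCgroup = false`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Println(err)
	}
}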
	I0328 00:49:13.884453 2164752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:49:13.893652 2164752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:49:13.902354 2164752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:49:13.998178 2164752 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 00:49:14.131580 2164752 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0328 00:49:14.131677 2164752 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0328 00:49:14.135634 2164752 start.go:562] Will wait 60s for crictl version
	I0328 00:49:14.135750 2164752 ssh_runner.go:195] Run: which crictl
	I0328 00:49:14.139297 2164752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:49:14.183686 2164752 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
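Both waits above are simple polls with a deadline: stat the containerd socket, then ask crictl for a version, retrying until the 60s budget runs out. A minimal sketch of the socket wait (a local os.Stat standing in for the ssh_runner stat):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}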
	I0328 00:49:14.183805 2164752 ssh_runner.go:195] Run: containerd --version
	I0328 00:49:14.206537 2164752 ssh_runner.go:195] Run: containerd --version
	I0328 00:49:14.232464 2164752 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0328 00:49:16.825083 2154103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:49:16.839970 2154103 api_server.go:72] duration metric: took 6m2.51083602s to wait for apiserver process to appear ...
	I0328 00:49:16.840001 2154103 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:49:16.842533 2154103 out.go:177] 
	W0328 00:49:16.844576 2154103 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0328 00:49:16.844594 2154103 out.go:239] * 
	W0328 00:49:16.845547 2154103 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:49:16.847340 2154103 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	54f61494f5fe7       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   5ade5e29e5705       dashboard-metrics-scraper-8d5bb5db8-kdvzd
	4a61761262146       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   eca031667acc6       storage-provisioner
	13addc9bf1b1a       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   45b4a6e139ad5       kubernetes-dashboard-cd95d586-jqzw6
	901e2541b424a       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   743f0a479e772       busybox
	9b4aa3641a761       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   e0b29257d81b9       kube-proxy-nlm92
	1953997de19b3       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   eca031667acc6       storage-provisioner
	5bdbbfc1c62e7       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   dcb8400375c4b       kindnet-rvx2d
	4623300a67775       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   e901a71cdf1e6       coredns-74ff55c5b-fx44w
	7561c590526d8       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   39ebd5b54e97b       kube-apiserver-old-k8s-version-847679
	d3ae5f0db2652       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   27949b53515a2       kube-scheduler-old-k8s-version-847679
	a2bd91e7368c7       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   aed7accff835c       etcd-old-k8s-version-847679
	929efc17e1486       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   978da64b72484       kube-controller-manager-old-k8s-version-847679
	162e9abdb8db5       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   c4c15279b9713       busybox
	d5336e221d219       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   c00a115745aba       coredns-74ff55c5b-fx44w
	1031e991826f8       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   99c716c8d88db       kindnet-rvx2d
	2ae058eddf2e0       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   b55f395417eb0       kube-proxy-nlm92
	85e8dd9d85649       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   7eb5d5c782e4c       etcd-old-k8s-version-847679
	44eb23f06a2fe       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   1aaa94b9534c6       kube-scheduler-old-k8s-version-847679
	50e0290aaf17c       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   088b560212987       kube-controller-manager-old-k8s-version-847679
	c2e931cf7f511       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   14455a379658e       kube-apiserver-old-k8s-version-847679
	
	
	==> containerd <==
	Mar 28 00:45:07 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:07.255804248Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Mar 28 00:45:07 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:07.257460639Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.259197820Z" level=info msg="CreateContainer within sandbox \"5ade5e29e57051312b2ae5b25793bbd9b7d831dff7aaf4c0c7e2e83be8ed9bf2\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.286669207Z" level=info msg="CreateContainer within sandbox \"5ade5e29e57051312b2ae5b25793bbd9b7d831dff7aaf4c0c7e2e83be8ed9bf2\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"4ee8d118b0757d70dace6430aab1fe5948cfcc42eb3c58ac396bc4d4a69c81fc\""
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.287318269Z" level=info msg="StartContainer for \"4ee8d118b0757d70dace6430aab1fe5948cfcc42eb3c58ac396bc4d4a69c81fc\""
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.369314750Z" level=info msg="StartContainer for \"4ee8d118b0757d70dace6430aab1fe5948cfcc42eb3c58ac396bc4d4a69c81fc\" returns successfully"
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.399917612Z" level=info msg="shim disconnected" id=4ee8d118b0757d70dace6430aab1fe5948cfcc42eb3c58ac396bc4d4a69c81fc
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.399984360Z" level=warning msg="cleaning up after shim disconnected" id=4ee8d118b0757d70dace6430aab1fe5948cfcc42eb3c58ac396bc4d4a69c81fc namespace=k8s.io
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.399996618Z" level=info msg="cleaning up dead shim"
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.408484260Z" level=warning msg="cleanup warnings time=\"2024-03-28T00:45:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2942 runtime=io.containerd.runc.v2\n"
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.924087155Z" level=info msg="RemoveContainer for \"47054fa270c4ca13b769193c0467588e8198b7fff21a24c130e9af7b840c3189\""
	Mar 28 00:45:22 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:45:22.930305157Z" level=info msg="RemoveContainer for \"47054fa270c4ca13b769193c0467588e8198b7fff21a24c130e9af7b840c3189\" returns successfully"
	Mar 28 00:46:38 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:38.250478704Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 00:46:38 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:38.256068969Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Mar 28 00:46:38 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:38.258294759Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Mar 28 00:46:43 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:43.251421314Z" level=info msg="CreateContainer within sandbox \"5ade5e29e57051312b2ae5b25793bbd9b7d831dff7aaf4c0c7e2e83be8ed9bf2\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 28 00:46:43 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:43.268281643Z" level=info msg="CreateContainer within sandbox \"5ade5e29e57051312b2ae5b25793bbd9b7d831dff7aaf4c0c7e2e83be8ed9bf2\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02\""
	Mar 28 00:46:43 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:43.268990529Z" level=info msg="StartContainer for \"54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02\""
	Mar 28 00:46:43 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:43.352321819Z" level=info msg="StartContainer for \"54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02\" returns successfully"
	Mar 28 00:46:43 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:43.380234212Z" level=info msg="shim disconnected" id=54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:46:43 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:43.380323771Z" level=warning msg="cleaning up after shim disconnected" id=54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02 namespace=k8s.io
	Mar 28 00:46:43 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:43.380353383Z" level=info msg="cleaning up dead shim"
	Mar 28 00:46:43 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:43.388448132Z" level=warning msg="cleanup warnings time=\"2024-03-28T00:46:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3178 runtime=io.containerd.runc.v2\n"
	Mar 28 00:46:44 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:44.113157656Z" level=info msg="RemoveContainer for \"4ee8d118b0757d70dace6430aab1fe5948cfcc42eb3c58ac396bc4d4a69c81fc\""
	Mar 28 00:46:44 old-k8s-version-847679 containerd[568]: time="2024-03-28T00:46:44.119335692Z" level=info msg="RemoveContainer for \"4ee8d118b0757d70dace6430aab1fe5948cfcc42eb3c58ac396bc4d4a69c81fc\" returns successfully"
	
	
	==> coredns [4623300a67775e1f7651b2f936de95a5a551b9feae79c1bb227d7de184861a5e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:41244 - 12656 "HINFO IN 7001301198460263219.4507378155678052898. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012168733s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0328 00:44:07.115968       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-28 00:43:37.115180942 +0000 UTC m=+0.025802147) (total time: 30.000676478s):
	Trace[2019727887]: [30.000676478s] [30.000676478s] END
	E0328 00:44:07.116000       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0328 00:44:07.116221       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-28 00:43:37.115865138 +0000 UTC m=+0.026486344) (total time: 30.000330806s):
	Trace[939984059]: [30.000330806s] [30.000330806s] END
	E0328 00:44:07.116259       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0328 00:44:07.116405       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-28 00:43:37.116096267 +0000 UTC m=+0.026717481) (total time: 30.000292078s):
	Trace[911902081]: [30.000292078s] [30.000292078s] END
	E0328 00:44:07.116440       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d5336e221d2198e202649e40bf6fd781694bf14d3c81e1d7386a7f0f6afbf511] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:32908 - 11063 "HINFO IN 4488164244507893003.5637758481510865903. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026703646s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-847679
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-847679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=df52f6f8e24b930a4c903cebb17d11a580ef5873
	                    minikube.k8s.io/name=old-k8s-version-847679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T00_41_00_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:40:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-847679
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:49:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:44:25 +0000   Thu, 28 Mar 2024 00:40:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:44:25 +0000   Thu, 28 Mar 2024 00:40:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:44:25 +0000   Thu, 28 Mar 2024 00:40:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:44:25 +0000   Thu, 28 Mar 2024 00:41:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-847679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 24e8a93bb70543e480b4b27acdeb08b1
	  System UUID:                0f90d1b1-1803-48a2-b0cf-5f7620ecea85
	  Boot ID:                    561aadd0-a15d-4e78-9187-a38c38772b44
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 coredns-74ff55c5b-fx44w                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m3s
	  kube-system                 etcd-old-k8s-version-847679                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m10s
	  kube-system                 kindnet-rvx2d                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m3s
	  kube-system                 kube-apiserver-old-k8s-version-847679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-controller-manager-old-k8s-version-847679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-proxy-nlm92                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-old-k8s-version-847679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 metrics-server-9975d5f86-8p64b                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-kdvzd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-jqzw6               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m30s (x5 over 8m30s)  kubelet     Node old-k8s-version-847679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s (x4 over 8m30s)  kubelet     Node old-k8s-version-847679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s (x4 over 8m30s)  kubelet     Node old-k8s-version-847679 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m10s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m10s                  kubelet     Node old-k8s-version-847679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m10s                  kubelet     Node old-k8s-version-847679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m10s                  kubelet     Node old-k8s-version-847679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m10s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m3s                   kubelet     Node old-k8s-version-847679 status is now: NodeReady
	  Normal  Starting                 8m2s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-847679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x7 over 5m56s)  kubelet     Node old-k8s-version-847679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-847679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m39s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000760] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001041] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=0000000016229697
	[  +0.001167] FS-Cache: N-key=[8] '04455c0100000000'
	[  +0.002880] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001087] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=0000000004f60fee
	[  +0.001011] FS-Cache: O-key=[8] '04455c0100000000'
	[  +0.000781] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=0000000066003731
	[  +0.001059] FS-Cache: N-key=[8] '04455c0100000000'
	[  +2.722142] FS-Cache: Duplicate cookie detected
	[  +0.000813] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=0000000010f04798
	[  +0.001050] FS-Cache: O-key=[8] '03455c0100000000'
	[  +0.000751] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.000953] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=0000000016229697
	[  +0.001085] FS-Cache: N-key=[8] '03455c0100000000'
	[  +0.338762] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=00000000feeb1004
	[  +0.001139] FS-Cache: O-key=[8] '09455c0100000000'
	[  +0.000826] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000cfaea348
	[  +0.001051] FS-Cache: N-key=[8] '09455c0100000000'
	[Mar28 00:13] IPVS: rr: TCP 192.168.49.254:8443 - no destination available
	
	
	==> etcd [85e8dd9d85649d6c95b16e4c7a6123db57e7cb4e760c671b3cbff82b373d3757] <==
	2024-03-28 00:40:49.771262 I | etcdserver/membership: added member 9f0758e1c58a86ed [https://192.168.85.2:2380] to cluster 68eaea490fab4e05
	raft2024/03/28 00:40:49 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/03/28 00:40:49 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/03/28 00:40:49 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/03/28 00:40:49 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/03/28 00:40:49 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-03-28 00:40:49.813319 I | etcdserver: published {Name:old-k8s-version-847679 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-03-28 00:40:49.815371 I | embed: ready to serve client requests
	2024-03-28 00:40:49.815573 I | embed: ready to serve client requests
	2024-03-28 00:40:49.817398 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-28 00:40:49.821754 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-28 00:40:49.822427 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-28 00:40:49.822641 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-28 00:40:49.824188 I | embed: serving client requests on 192.168.85.2:2379
	2024-03-28 00:41:17.079959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:41:18.795706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:41:28.795754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:41:38.795821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:41:48.795850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:41:58.795819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:42:08.797570 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:42:18.795649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:42:28.796176 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:42:38.795867 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:42:48.795793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [a2bd91e7368c7989679c37e7fd6815280fab1c94957b5cd07c5e6a84f2b1e6ff] <==
	2024-03-28 00:45:17.883560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:45:27.883662 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:45:37.883435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:45:47.883509 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:45:57.883534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:46:07.883610 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:46:17.883534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:46:27.883476 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:46:37.883485 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:46:47.883482 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:46:57.883566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:47:07.883573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:47:17.883451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:47:27.883620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:47:37.883518 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:47:47.883490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:47:57.883417 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:48:07.883382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:48:17.883527 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:48:27.883512 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:48:37.883607 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:48:47.883432 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:48:57.883557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:49:07.886164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 00:49:17.883676 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 00:49:18 up  8:31,  0 users,  load average: 0.91, 1.85, 2.51
	Linux old-k8s-version-847679 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [1031e991826f8e2e38115ee7cfd25869c0e2fb8310012117196ad1c7df7997a4] <==
	I0328 00:41:16.563803       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 00:41:16.563865       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0328 00:41:16.563963       1 main.go:116] setting mtu 1500 for CNI 
	I0328 00:41:16.563976       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 00:41:16.563997       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 00:41:46.788345       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 00:41:46.802644       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:41:46.802675       1 main.go:227] handling current node
	I0328 00:41:56.821009       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:41:56.821038       1 main.go:227] handling current node
	I0328 00:42:06.837830       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:42:06.837860       1 main.go:227] handling current node
	I0328 00:42:16.850755       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:42:16.850783       1 main.go:227] handling current node
	I0328 00:42:26.867829       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:42:26.867860       1 main.go:227] handling current node
	I0328 00:42:36.883230       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:42:36.883263       1 main.go:227] handling current node
	I0328 00:42:46.887521       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:42:46.887549       1 main.go:227] handling current node
	
	
	==> kindnet [5bdbbfc1c62e7b4e088ee2f8e1a88d1391a34b9e9268e51a02ee859dba211cd0] <==
	I0328 00:47:18.127798       1 main.go:227] handling current node
	I0328 00:47:28.144459       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:47:28.144762       1 main.go:227] handling current node
	I0328 00:47:38.153229       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:47:38.153257       1 main.go:227] handling current node
	I0328 00:47:48.160924       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:47:48.160955       1 main.go:227] handling current node
	I0328 00:47:58.178412       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:47:58.178439       1 main.go:227] handling current node
	I0328 00:48:08.187197       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:48:08.187226       1 main.go:227] handling current node
	I0328 00:48:18.202100       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:48:18.202248       1 main.go:227] handling current node
	I0328 00:48:28.208864       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:48:28.208895       1 main.go:227] handling current node
	I0328 00:48:38.219554       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:48:38.219689       1 main.go:227] handling current node
	I0328 00:48:48.235796       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:48:48.235825       1 main.go:227] handling current node
	I0328 00:48:58.242961       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:48:58.242993       1 main.go:227] handling current node
	I0328 00:49:08.258750       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:49:08.258780       1 main.go:227] handling current node
	I0328 00:49:18.269324       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 00:49:18.269360       1 main.go:227] handling current node
	
	
	==> kube-apiserver [7561c590526d8e9e29f4407a8f912b1d46095c849d177c54397bc1ba65755210] <==
	I0328 00:46:00.255473       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:46:00.255483       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 00:46:32.918024       1 client.go:360] parsed scheme: "passthrough"
	I0328 00:46:32.918242       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:46:32.918260       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0328 00:46:37.865618       1 handler_proxy.go:102] no RequestInfo found in the context
	E0328 00:46:37.865850       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 00:46:37.865870       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0328 00:47:09.082140       1 client.go:360] parsed scheme: "passthrough"
	I0328 00:47:09.082225       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:47:09.082244       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 00:47:47.711503       1 client.go:360] parsed scheme: "passthrough"
	I0328 00:47:47.711546       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:47:47.711556       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 00:48:20.348377       1 client.go:360] parsed scheme: "passthrough"
	I0328 00:48:20.348416       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:48:20.348426       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0328 00:48:35.876682       1 handler_proxy.go:102] no RequestInfo found in the context
	E0328 00:48:35.876765       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 00:48:35.876916       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0328 00:48:51.863726       1 client.go:360] parsed scheme: "passthrough"
	I0328 00:48:51.863770       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:48:51.863779       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
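
	The recurring OpenAPI failure is the aggregation layer retrying v1beta1.metrics.k8s.io: the backing metrics-server Service answers 503, so the spec fetch is rate-limit-requeued rather than treated as fatal. The APIService object records the same condition; a quick inspection (context name assumed):

	    kubectl --context old-k8s-version-847679 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context old-k8s-version-847679 describe apiservice v1beta1.metrics.k8s.io
	    # expect Available=False with a reason along the lines of FailedDiscoveryCheck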
	
	
	==> kube-apiserver [c2e931cf7f5113d558fdfe3d5cb0a4c19f14cb714e95b875f13b0c9726b90a7f] <==
	I0328 00:40:57.471245       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0328 00:40:57.498835       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0328 00:40:57.503502       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0328 00:40:57.503528       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0328 00:40:57.952258       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 00:40:57.993899       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0328 00:40:58.145322       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0328 00:40:58.146398       1 controller.go:606] quota admission added evaluator for: endpoints
	I0328 00:40:58.152072       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 00:40:59.094061       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0328 00:40:59.868572       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0328 00:40:59.973486       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0328 00:41:08.300383       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:41:15.454964       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0328 00:41:15.669846       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0328 00:41:28.835499       1 client.go:360] parsed scheme: "passthrough"
	I0328 00:41:28.835545       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:41:28.835554       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 00:42:03.232669       1 client.go:360] parsed scheme: "passthrough"
	I0328 00:42:03.232716       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:42:03.232730       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 00:42:40.072195       1 client.go:360] parsed scheme: "passthrough"
	I0328 00:42:40.072328       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 00:42:40.072347       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0328 00:42:50.731525       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [50e0290aaf17c8bff72a3fc7ab35c88f89433822d4a82ced818c0514a00be34c] <==
	E0328 00:41:15.633030       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0328 00:41:15.635782       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-ttphm"
	I0328 00:41:15.642132       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0328 00:41:15.691145       1 shared_informer.go:247] Caches are synced for attach detach 
	I0328 00:41:15.691152       1 shared_informer.go:247] Caches are synced for stateful set 
	I0328 00:41:15.694085       1 shared_informer.go:247] Caches are synced for job 
	I0328 00:41:15.707812       1 shared_informer.go:247] Caches are synced for resource quota 
	I0328 00:41:15.711070       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nlm92"
	I0328 00:41:15.725513       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rvx2d"
	I0328 00:41:15.731464       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0328 00:41:15.744277       1 shared_informer.go:247] Caches are synced for resource quota 
	I0328 00:41:15.770083       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0328 00:41:15.773589       1 shared_informer.go:247] Caches are synced for expand 
	E0328 00:41:15.784258       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"805fde42-9dbf-4da0-8a28-c68754e042ea", ResourceVersion:"267", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63847183259, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40017ffee0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40017fff00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x40017fff20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001a4ed40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017ff
f40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017fff60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017fffa0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001aa0240), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001a0f6e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004c4bd0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000770900)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001a0f738)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0328 00:41:15.824607       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"0d8ceb2c-fa02-449a-a999-64eeefa4af21", ResourceVersion:"285", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63847183260, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001978000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001978020)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001978040), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001978060), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001978080), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019780a0), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40019780c0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001978100)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001aa02a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001a0f938), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004c4c40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000770908)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001a0f980)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0328 00:41:15.827423       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"805fde42-9dbf-4da0-8a28-c68754e042ea", ResourceVersion:"407", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63847183259, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7f980), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7f9a0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7f9c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7f9e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001d7fa00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001e9c340), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d7fa20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d7fa40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d7fa80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001d27aa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001ea81f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f69a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001b991d8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001ea8248)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0328 00:41:15.856819       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0328 00:41:16.146982       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0328 00:41:16.147004       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0328 00:41:16.157019       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0328 00:41:17.290855       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0328 00:41:17.330342       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-ttphm"
	I0328 00:41:20.607102       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0328 00:42:50.460501       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0328 00:42:50.604548       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
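
	The "object has been modified" errors above are optimistic-concurrency conflicts, not corruption: kubeadm and the controller both wrote DaemonSet status (and the aggregation controller the admin ClusterRole) against stale copies during bring-up, and each write is checked against metadata.resourceVersion before it is accepted. The controller re-reads and retries, so these are transient. The version being compared is plainly visible (context name assumed):

	    kubectl --context old-k8s-version-847679 -n kube-system get daemonset kube-proxy \
	      -o jsonpath='{.metadata.resourceVersion}'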
	
	
	==> kube-controller-manager [929efc17e1486e2251236d64f99aebf2fb32a368cd3c804c87c715e4efc9e378] <==
	W0328 00:44:58.680617       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 00:45:24.004948       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 00:45:30.331154       1 request.go:655] Throttling request took 1.048317375s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0328 00:45:31.182662       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 00:45:54.509041       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 00:46:02.833150       1 request.go:655] Throttling request took 1.048401551s, request: GET:https://192.168.85.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0328 00:46:03.684574       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 00:46:25.012557       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 00:46:35.334918       1 request.go:655] Throttling request took 1.048129798s, request: GET:https://192.168.85.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0328 00:46:36.226220       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 00:46:55.514658       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 00:47:07.876843       1 request.go:655] Throttling request took 1.048365308s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0328 00:47:08.728555       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 00:47:26.016964       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 00:47:40.379064       1 request.go:655] Throttling request took 1.04843949s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0328 00:47:41.230576       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 00:47:56.518704       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 00:48:12.881005       1 request.go:655] Throttling request took 1.048465426s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
	W0328 00:48:13.732485       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 00:48:27.020686       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 00:48:45.383036       1 request.go:655] Throttling request took 1.04817623s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0328 00:48:46.234368       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 00:48:57.568414       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 00:49:17.884892       1 request.go:655] Throttling request took 1.047772327s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0328 00:49:18.737629       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
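
	This controller-manager instance spends the whole window failing discovery for metrics.k8s.io/v1beta1 and throttling its own discovery bursts, consistent with the 503s the apiserver logged for the same APIService. The end-to-end symptom is that the metrics API answers nothing; the pod selector below assumes the addon's usual k8s-app label:

	    kubectl --context old-k8s-version-847679 top nodes
	    kubectl --context old-k8s-version-847679 -n kube-system get pods -l k8s-app=metrics-server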
	
	
	==> kube-proxy [2ae058eddf2e02ba2761560b281c435e127c3e0bccd6a2b72fc79c217c23b874] <==
	I0328 00:41:16.605631       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0328 00:41:16.605731       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0328 00:41:16.696573       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0328 00:41:16.696674       1 server_others.go:185] Using iptables Proxier.
	I0328 00:41:16.696917       1 server.go:650] Version: v1.20.0
	I0328 00:41:16.699841       1 config.go:315] Starting service config controller
	I0328 00:41:16.699869       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0328 00:41:16.707123       1 config.go:224] Starting endpoint slice config controller
	I0328 00:41:16.707138       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0328 00:41:16.800035       1 shared_informer.go:247] Caches are synced for service config 
	I0328 00:41:16.813151       1 shared_informer.go:247] Caches are synced for endpoint slice config 
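
	The 'Unknown proxy mode ""' warning only means the mode field was left empty in the kube-proxy configuration, so the proxier falls back to iptables. On a kubeadm-provisioned node such as this one, the setting lives in the kube-proxy ConfigMap under the config.conf key (key name per kubeadm convention):

	    kubectl --context old-k8s-version-847679 -n kube-system get configmap kube-proxy \
	      -o jsonpath='{.data.config\.conf}' | grep -E '^mode:'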
	
	
	==> kube-proxy [9b4aa3641a761e98ead7ecdb26ec483b4c71d90133d1df4a858844137d457d00] <==
	I0328 00:43:38.989613       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0328 00:43:38.989715       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0328 00:43:39.012878       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0328 00:43:39.013258       1 server_others.go:185] Using iptables Proxier.
	I0328 00:43:39.013646       1 server.go:650] Version: v1.20.0
	I0328 00:43:39.014865       1 config.go:315] Starting service config controller
	I0328 00:43:39.015099       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0328 00:43:39.023865       1 config.go:224] Starting endpoint slice config controller
	I0328 00:43:39.024070       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0328 00:43:39.115413       1 shared_informer.go:247] Caches are synced for service config 
	I0328 00:43:39.124308       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [44eb23f06a2fe2a91f759a876119a1e22082e1b816c40eef2bf31115998ee0dd] <==
	W0328 00:40:56.643067       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:40:56.643076       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 00:40:56.643082       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 00:40:56.698592       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0328 00:40:56.698941       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:40:56.702064       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:40:56.699434       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0328 00:40:56.704864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 00:40:56.705123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 00:40:56.705310       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 00:40:56.704277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 00:40:56.704371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 00:40:56.706145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 00:40:56.706438       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 00:40:56.706670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 00:40:56.706996       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 00:40:56.707271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 00:40:56.707501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 00:40:56.709017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 00:40:57.560188       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 00:40:57.568213       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 00:40:57.666656       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 00:40:57.707877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 00:40:57.792165       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 00:40:58.302388       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
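
	The burst of "forbidden" list/watch errors is a startup race: the scheduler began its informers before the apiserver had finished reconciling the system:kube-scheduler RBAC bindings, and the final "Caches are synced" line shows it recovered. Whether the role has settled can be checked with an impersonated access review:

	    kubectl --context old-k8s-version-847679 auth can-i list pods --as=system:kube-scheduler
	    # once RBAC is reconciled this prints: yes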
	
	
	==> kube-scheduler [d3ae5f0db2652c8e6833525020feeff467fa848d01cf3f8597353caf9be671dc] <==
	I0328 00:43:29.106296       1 serving.go:331] Generated self-signed cert in-memory
	W0328 00:43:34.679781       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 00:43:34.679820       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:43:34.681946       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 00:43:34.681972       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 00:43:34.849029       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0328 00:43:34.849218       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:43:34.849226       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:43:34.849238       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0328 00:43:35.061064       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 28 00:47:34 old-k8s-version-847679 kubelet[664]: E0328 00:47:34.250682     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:47:44 old-k8s-version-847679 kubelet[664]: E0328 00:47:44.250936     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 00:47:45 old-k8s-version-847679 kubelet[664]: I0328 00:47:45.249659     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:47:45 old-k8s-version-847679 kubelet[664]: E0328 00:47:45.250327     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:47:55 old-k8s-version-847679 kubelet[664]: E0328 00:47:55.253899     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 00:47:58 old-k8s-version-847679 kubelet[664]: I0328 00:47:58.249065     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:47:58 old-k8s-version-847679 kubelet[664]: E0328 00:47:58.249397     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:48:07 old-k8s-version-847679 kubelet[664]: E0328 00:48:07.249817     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 00:48:13 old-k8s-version-847679 kubelet[664]: I0328 00:48:13.249095     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:48:13 old-k8s-version-847679 kubelet[664]: E0328 00:48:13.249493     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:48:21 old-k8s-version-847679 kubelet[664]: E0328 00:48:21.249539     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 00:48:24 old-k8s-version-847679 kubelet[664]: I0328 00:48:24.249253     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:48:24 old-k8s-version-847679 kubelet[664]: E0328 00:48:24.250324     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:48:35 old-k8s-version-847679 kubelet[664]: E0328 00:48:35.249831     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 00:48:37 old-k8s-version-847679 kubelet[664]: I0328 00:48:37.248997     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:48:37 old-k8s-version-847679 kubelet[664]: E0328 00:48:37.249346     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:48:49 old-k8s-version-847679 kubelet[664]: E0328 00:48:49.249750     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 00:48:52 old-k8s-version-847679 kubelet[664]: I0328 00:48:52.250236     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:48:52 old-k8s-version-847679 kubelet[664]: E0328 00:48:52.251036     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:49:02 old-k8s-version-847679 kubelet[664]: E0328 00:49:02.250400     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 00:49:04 old-k8s-version-847679 kubelet[664]: I0328 00:49:04.249080     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:49:04 old-k8s-version-847679 kubelet[664]: E0328 00:49:04.249958     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:49:15 old-k8s-version-847679 kubelet[664]: I0328 00:49:15.248984     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 54f61494f5fe74845ecf049eed2fb9353ae0d7b4d382f980c8c525c08bafbb02
	Mar 28 00:49:15 old-k8s-version-847679 kubelet[664]: E0328 00:49:15.249877     664 pod_workers.go:191] Error syncing pod ca034678-ff67-4235-b413-dcc5a40793ca ("dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdvzd_kubernetes-dashboard(ca034678-ff67-4235-b413-dcc5a40793ca)"
	Mar 28 00:49:15 old-k8s-version-847679 kubelet[664]: E0328 00:49:15.251321     664 pod_workers.go:191] Error syncing pod 622eacec-5f04-4f0a-835f-0d780fcfcea5 ("metrics-server-9975d5f86-8p64b_kube-system(622eacec-5f04-4f0a-835f-0d780fcfcea5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
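
The repeating ImagePullBackOff above points at fake.domain/registry.k8s.io/echoserver:1.4, a registry that cannot be reached, so the pull never succeeds and the back-off loops. One way to confirm the waiting reason, as a sketch reusing the context, namespace, and pod name from these lines:

	kubectl --context old-k8s-version-847679 -n kube-system \
	  get pod metrics-server-9975d5f86-8p64b \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
	# prints ImagePullBackOff (or ErrImagePull) while the pull keeps failing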
	
	
	==> kubernetes-dashboard [13addc9bf1b1abd9fc750480ab07284f1441f239613450d560c6f43b873ad3d4] <==
	2024/03/28 00:43:56 Using namespace: kubernetes-dashboard
	2024/03/28 00:43:56 Using in-cluster config to connect to apiserver
	2024/03/28 00:43:56 Using secret token for csrf signing
	2024/03/28 00:43:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/28 00:43:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/28 00:43:56 Successful initial request to the apiserver, version: v1.20.0
	2024/03/28 00:43:56 Generating JWE encryption key
	2024/03/28 00:43:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/28 00:43:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/28 00:43:56 Initializing JWE encryption key from synchronized object
	2024/03/28 00:43:56 Creating in-cluster Sidecar client
	2024/03/28 00:43:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:43:56 Serving insecurely on HTTP port: 9090
	2024/03/28 00:44:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:44:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:45:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:45:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:46:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:46:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:47:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:47:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:48:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:48:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 00:43:56 Starting overwatch
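
Every retry above targets the dashboard-metrics-scraper service, whose only backend is the pod seen crash-looping in the kubelet log. A sketch of making the missing backend visible, assuming the same context as the rest of this report:

	kubectl --context old-k8s-version-847679 -n kubernetes-dashboard \
	  get endpoints dashboard-metrics-scraper
	# an empty ENDPOINTS column means no ready pod backs the service,
	# matching the "unable to handle the request" retries above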
	
	
	==> storage-provisioner [1953997de19b3626051730f74af1c0d5ef9137f914a4237e052b6767f3a9d819] <==
	I0328 00:43:37.650204       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0328 00:44:07.652340       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
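
The fatal timeout above is against 10.96.0.1:443, the in-cluster address of the kubernetes service, logged while the apiserver was still coming back after the restart. As a sketch, that address can be probed from inside the node (profile name taken from this report):

	minikube -p old-k8s-version-847679 ssh -- \
	  curl -sk --max-time 5 https://10.96.0.1:443/version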
	
	
	==> storage-provisioner [4a61761262146a24c8b48240811e0993245eb062b5fd452de9240b0edbe93fd3] <==
	I0328 00:44:24.449507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 00:44:24.466451       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 00:44:24.466705       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 00:44:41.956465       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 00:44:41.957061       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2827b4e7-f7ce-4bef-a24a-10b530d8e57f", APIVersion:"v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-847679_83c56f54-43c8-4b56-864c-df95249d2df5 became leader
	I0328 00:44:41.957160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-847679_83c56f54-43c8-4b56-864c-df95249d2df5!
	I0328 00:44:42.057340       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-847679_83c56f54-43c8-4b56-864c-df95249d2df5!
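
This second incarnation starts its controller only after winning the kube-system/k8s.io-minikube-hostpath lease. On a v1.20-era cluster the lock is an Endpoints object, and the holder is recorded in the standard client-go leader-election annotation; reading it is a sketch under that assumption:

	kubectl --context old-k8s-version-847679 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath="{.metadata.annotations['control-plane\.alpha\.kubernetes\.io/leader']}"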
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-847679 -n old-k8s-version-847679
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-847679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-8p64b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-847679 describe pod metrics-server-9975d5f86-8p64b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-847679 describe pod metrics-server-9975d5f86-8p64b: exit status 1 (148.257777ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-8p64b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-847679 describe pod metrics-server-9975d5f86-8p64b: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.48s)
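
One reading of the NotFound above, offered as an assumption: the metrics-server pod captured by the phase filter was replaced between the listing and the describe, so the recorded name no longer existed. Rerunning the same filter from the trace returns whatever name is current:

	kubectl --context old-k8s-version-847679 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{.items[*].metadata.name}'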

                                                
                                    

Test pass (296/335)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.97
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.29.3/json-events 8.78
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.09
18 TestDownloadOnly/v1.29.3/DeleteAll 0.19
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-beta.0/json-events 9.66
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.14
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.12
36 TestAddons/Setup 118.4
38 TestAddons/parallel/Registry 15.55
40 TestAddons/parallel/InspektorGadget 10.77
41 TestAddons/parallel/MetricsServer 7.02
44 TestAddons/parallel/CSI 68.05
46 TestAddons/parallel/CloudSpanner 5.74
47 TestAddons/parallel/LocalPath 51.6
48 TestAddons/parallel/NvidiaDevicePlugin 5.55
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.19
54 TestCertOptions 38.92
55 TestCertExpiration 228.95
57 TestForceSystemdFlag 39.64
58 TestForceSystemdEnv 40.19
59 TestDockerEnvContainerd 44.87
64 TestErrorSpam/setup 32.07
65 TestErrorSpam/start 0.75
66 TestErrorSpam/status 1.02
67 TestErrorSpam/pause 1.62
68 TestErrorSpam/unpause 1.77
69 TestErrorSpam/stop 1.43
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 79.48
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.56
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.11
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.27
81 TestFunctional/serial/CacheCmd/cache/add_local 1.46
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
86 TestFunctional/serial/CacheCmd/cache/delete 0.15
87 TestFunctional/serial/MinikubeKubectlCmd 0.16
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 43.54
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.67
92 TestFunctional/serial/LogsFileCmd 1.69
93 TestFunctional/serial/InvalidService 4.07
95 TestFunctional/parallel/ConfigCmd 0.58
96 TestFunctional/parallel/DashboardCmd 8.76
97 TestFunctional/parallel/DryRun 0.48
98 TestFunctional/parallel/InternationalLanguage 0.24
99 TestFunctional/parallel/StatusCmd 1.23
103 TestFunctional/parallel/ServiceCmdConnect 9.79
104 TestFunctional/parallel/AddonsCmd 0.21
105 TestFunctional/parallel/PersistentVolumeClaim 27.06
107 TestFunctional/parallel/SSHCmd 0.66
108 TestFunctional/parallel/CpCmd 2.33
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 2.02
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
119 TestFunctional/parallel/License 0.35
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.29
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
133 TestFunctional/parallel/ProfileCmd/profile_list 0.47
134 TestFunctional/parallel/ServiceCmd/List 0.6
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
137 TestFunctional/parallel/MountCmd/any-port 7.4
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
139 TestFunctional/parallel/ServiceCmd/Format 0.43
140 TestFunctional/parallel/ServiceCmd/URL 0.48
141 TestFunctional/parallel/MountCmd/specific-port 2.17
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.76
143 TestFunctional/parallel/Version/short 0.09
144 TestFunctional/parallel/Version/components 1.32
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
150 TestFunctional/parallel/ImageCommands/Setup 1.79
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/StartCluster 125.57
168 TestMultiControlPlane/serial/DeployApp 20.51
169 TestMultiControlPlane/serial/PingHostFromPods 1.72
170 TestMultiControlPlane/serial/AddWorkerNode 23.22
171 TestMultiControlPlane/serial/NodeLabels 0.12
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
173 TestMultiControlPlane/serial/CopyFile 19.4
174 TestMultiControlPlane/serial/StopSecondaryNode 12.87
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
176 TestMultiControlPlane/serial/RestartSecondaryNode 30.41
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.73
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 120.65
179 TestMultiControlPlane/serial/DeleteSecondaryNode 10.22
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
181 TestMultiControlPlane/serial/StopCluster 35.96
182 TestMultiControlPlane/serial/RestartCluster 82.14
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
184 TestMultiControlPlane/serial/AddSecondaryNode 44.43
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
189 TestJSONOutput/start/Command 88.64
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.79
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.67
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.74
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 42.36
215 TestKicCustomNetwork/use_default_bridge_network 35.85
216 TestKicExistingNetwork 34.68
217 TestKicCustomSubnet 35.98
218 TestKicStaticIP 34.92
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 68.25
223 TestMountStart/serial/StartWithMountFirst 6.34
224 TestMountStart/serial/VerifyMountFirst 0.28
225 TestMountStart/serial/StartWithMountSecond 5.8
226 TestMountStart/serial/VerifyMountSecond 0.25
227 TestMountStart/serial/DeleteFirst 1.58
228 TestMountStart/serial/VerifyMountPostDelete 0.25
229 TestMountStart/serial/Stop 1.2
230 TestMountStart/serial/RestartStopped 7.34
231 TestMountStart/serial/VerifyMountPostStop 0.25
234 TestMultiNode/serial/FreshStart2Nodes 75.7
235 TestMultiNode/serial/DeployApp2Nodes 32.21
236 TestMultiNode/serial/PingHostFrom2Pods 1.11
237 TestMultiNode/serial/AddNode 16.49
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.33
240 TestMultiNode/serial/CopyFile 9.89
241 TestMultiNode/serial/StopNode 2.22
242 TestMultiNode/serial/StartAfterStop 9.05
243 TestMultiNode/serial/RestartKeepsNodes 80.55
244 TestMultiNode/serial/DeleteNode 5.31
245 TestMultiNode/serial/StopMultiNode 23.98
246 TestMultiNode/serial/RestartMultiNode 49.85
247 TestMultiNode/serial/ValidateNameConflict 33.88
252 TestPreload 108.35
254 TestScheduledStopUnix 107.18
257 TestInsufficientStorage 10.1
258 TestRunningBinaryUpgrade 84.04
260 TestKubernetesUpgrade 373.23
261 TestMissingContainerUpgrade 159.57
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
264 TestNoKubernetes/serial/StartWithK8s 39.38
265 TestNoKubernetes/serial/StartWithStopK8s 16.49
266 TestNoKubernetes/serial/Start 5.96
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
268 TestNoKubernetes/serial/ProfileList 1.01
269 TestNoKubernetes/serial/Stop 1.27
270 TestNoKubernetes/serial/StartNoArgs 7.86
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
272 TestStoppedBinaryUpgrade/Setup 1.19
273 TestStoppedBinaryUpgrade/Upgrade 112.06
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.45
283 TestPause/serial/Start 92.13
284 TestPause/serial/SecondStartNoReconfiguration 7.36
285 TestPause/serial/Pause 0.92
286 TestPause/serial/VerifyStatus 0.39
287 TestPause/serial/Unpause 0.85
288 TestPause/serial/PauseAgain 1.04
289 TestPause/serial/DeletePaused 2.81
290 TestPause/serial/VerifyDeletedResources 0.42
298 TestNetworkPlugins/group/false 6.25
303 TestStartStop/group/old-k8s-version/serial/FirstStart 146.26
304 TestStartStop/group/old-k8s-version/serial/DeployApp 10.08
306 TestStartStop/group/no-preload/serial/FirstStart 70.45
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.78
308 TestStartStop/group/old-k8s-version/serial/Stop 14.9
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
311 TestStartStop/group/no-preload/serial/DeployApp 9.42
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
313 TestStartStop/group/no-preload/serial/Stop 12.07
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
315 TestStartStop/group/no-preload/serial/SecondStart 266.65
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
319 TestStartStop/group/no-preload/serial/Pause 3.4
321 TestStartStop/group/embed-certs/serial/FirstStart 81.61
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
325 TestStartStop/group/old-k8s-version/serial/Pause 3.55
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 90.89
328 TestStartStop/group/embed-certs/serial/DeployApp 7.42
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
330 TestStartStop/group/embed-certs/serial/Stop 12.06
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
332 TestStartStop/group/embed-certs/serial/SecondStart 281.27
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.47
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.16
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 279.38
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/embed-certs/serial/Pause 3.13
343 TestStartStop/group/newest-cni/serial/FirstStart 47.35
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.35
348 TestNetworkPlugins/group/auto/Start 87.94
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.4
351 TestStartStop/group/newest-cni/serial/Stop 3.16
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.35
353 TestStartStop/group/newest-cni/serial/SecondStart 24.7
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
357 TestStartStop/group/newest-cni/serial/Pause 3.59
358 TestNetworkPlugins/group/kindnet/Start 92.36
359 TestNetworkPlugins/group/auto/KubeletFlags 0.33
360 TestNetworkPlugins/group/auto/NetCatPod 9.31
361 TestNetworkPlugins/group/auto/DNS 0.19
362 TestNetworkPlugins/group/auto/Localhost 0.18
363 TestNetworkPlugins/group/auto/HairPin 0.16
364 TestNetworkPlugins/group/calico/Start 75.5
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.51
368 TestNetworkPlugins/group/kindnet/DNS 0.34
369 TestNetworkPlugins/group/kindnet/Localhost 0.23
370 TestNetworkPlugins/group/kindnet/HairPin 0.2
371 TestNetworkPlugins/group/custom-flannel/Start 67.72
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.51
374 TestNetworkPlugins/group/calico/NetCatPod 11.38
375 TestNetworkPlugins/group/calico/DNS 0.24
376 TestNetworkPlugins/group/calico/Localhost 0.18
377 TestNetworkPlugins/group/calico/HairPin 0.21
378 TestNetworkPlugins/group/enable-default-cni/Start 92.46
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.35
381 TestNetworkPlugins/group/custom-flannel/DNS 0.27
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
384 TestNetworkPlugins/group/flannel/Start 64.23
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.42
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
392 TestNetworkPlugins/group/flannel/NetCatPod 10.35
393 TestNetworkPlugins/group/flannel/DNS 0.3
394 TestNetworkPlugins/group/flannel/Localhost 0.21
395 TestNetworkPlugins/group/flannel/HairPin 0.19
396 TestNetworkPlugins/group/bridge/Start 49.34
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
398 TestNetworkPlugins/group/bridge/NetCatPod 8.26
399 TestNetworkPlugins/group/bridge/DNS 0.18
400 TestNetworkPlugins/group/bridge/Localhost 0.15
401 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (8.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-136920 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-136920 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.965484046s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.97s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-136920
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-136920: exit status 85 (78.958191ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-136920 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:55 UTC |          |
	|         | -p download-only-136920        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:55:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:55:54.793148 1957146 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:55:54.793335 1957146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:55:54.793363 1957146 out.go:304] Setting ErrFile to fd 2...
	I0327 23:55:54.793386 1957146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:55:54.793655 1957146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	W0327 23:55:54.793829 1957146 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18158-1951721/.minikube/config/config.json: open /home/jenkins/minikube-integration/18158-1951721/.minikube/config/config.json: no such file or directory
	I0327 23:55:54.794299 1957146 out.go:298] Setting JSON to true
	I0327 23:55:54.795213 1957146 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27493,"bootTime":1711556262,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 23:55:54.795315 1957146 start.go:139] virtualization:  
	I0327 23:55:54.797864 1957146 out.go:97] [download-only-136920] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 23:55:54.799774 1957146 out.go:169] MINIKUBE_LOCATION=18158
	W0327 23:55:54.798053 1957146 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 23:55:54.798100 1957146 notify.go:220] Checking for updates...
	I0327 23:55:54.801643 1957146 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:55:54.803222 1957146 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0327 23:55:54.805041 1957146 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0327 23:55:54.806658 1957146 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 23:55:54.809565 1957146 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:55:54.809854 1957146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:55:54.828216 1957146 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 23:55:54.828325 1957146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:55:54.897261 1957146 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 23:55:54.888627925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:55:54.897365 1957146 docker.go:295] overlay module found
	I0327 23:55:54.899347 1957146 out.go:97] Using the docker driver based on user configuration
	I0327 23:55:54.899373 1957146 start.go:297] selected driver: docker
	I0327 23:55:54.899380 1957146 start.go:901] validating driver "docker" against <nil>
	I0327 23:55:54.899484 1957146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:55:54.953687 1957146 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 23:55:54.944654101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:55:54.953860 1957146 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:55:54.954181 1957146 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 23:55:54.954336 1957146 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:55:54.956300 1957146 out.go:169] Using Docker driver with root privileges
	I0327 23:55:54.958155 1957146 cni.go:84] Creating CNI manager for ""
	I0327 23:55:54.958173 1957146 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:55:54.958191 1957146 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 23:55:54.958262 1957146 start.go:340] cluster config:
	{Name:download-only-136920 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-136920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:55:54.960553 1957146 out.go:97] Starting "download-only-136920" primary control-plane node in "download-only-136920" cluster
	I0327 23:55:54.960572 1957146 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 23:55:54.962131 1957146 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 23:55:54.962157 1957146 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 23:55:54.962268 1957146 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 23:55:54.975835 1957146 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 23:55:54.976022 1957146 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 23:55:54.976138 1957146 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 23:55:55.061758 1957146 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0327 23:55:55.061784 1957146 cache.go:56] Caching tarball of preloaded images
	I0327 23:55:55.062303 1957146 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 23:55:55.064713 1957146 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 23:55:55.064738 1957146 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0327 23:55:55.263661 1957146 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-136920 host does not exist
	  To start a cluster, run: "minikube start -p download-only-136920"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
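
The Last Start log above shows the preload flow: minikube resolves the remote tarball, then downloads it with the md5 taken from the ?checksum= query. The same check can be done by hand, as a sketch using the URL and checksum exactly as they appear in the log:

	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	echo "7e3d48ccb9f143791669d02e14ce1643  preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -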

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-136920
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (8.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-223414 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-223414 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.777883634s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (8.78s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-223414
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-223414: exit status 85 (84.563314ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-136920 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:55 UTC |                     |
	|         | -p download-only-136920        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-136920        | download-only-136920 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| start   | -o=json --download-only        | download-only-223414 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | -p download-only-223414        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:56:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:56:04.183985 1957315 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:56:04.184135 1957315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:56:04.184141 1957315 out.go:304] Setting ErrFile to fd 2...
	I0327 23:56:04.184145 1957315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:56:04.184419 1957315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0327 23:56:04.184818 1957315 out.go:298] Setting JSON to true
	I0327 23:56:04.185729 1957315 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27502,"bootTime":1711556262,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 23:56:04.185807 1957315 start.go:139] virtualization:  
	I0327 23:56:04.188310 1957315 out.go:97] [download-only-223414] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 23:56:04.190402 1957315 out.go:169] MINIKUBE_LOCATION=18158
	I0327 23:56:04.188593 1957315 notify.go:220] Checking for updates...
	I0327 23:56:04.192489 1957315 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:56:04.194176 1957315 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0327 23:56:04.196102 1957315 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0327 23:56:04.197975 1957315 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 23:56:04.201347 1957315 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:56:04.201740 1957315 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:56:04.223572 1957315 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 23:56:04.223705 1957315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:56:04.291806 1957315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-03-27 23:56:04.281391692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:56:04.291919 1957315 docker.go:295] overlay module found
	I0327 23:56:04.294075 1957315 out.go:97] Using the docker driver based on user configuration
	I0327 23:56:04.294112 1957315 start.go:297] selected driver: docker
	I0327 23:56:04.294119 1957315 start.go:901] validating driver "docker" against <nil>
	I0327 23:56:04.294249 1957315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:56:04.348489 1957315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-03-27 23:56:04.338111156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:56:04.348658 1957315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:56:04.348941 1957315 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 23:56:04.349128 1957315 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:56:04.351473 1957315 out.go:169] Using Docker driver with root privileges
	I0327 23:56:04.353541 1957315 cni.go:84] Creating CNI manager for ""
	I0327 23:56:04.353568 1957315 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:56:04.353578 1957315 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 23:56:04.353670 1957315 start.go:340] cluster config:
	{Name:download-only-223414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-223414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:56:04.356053 1957315 out.go:97] Starting "download-only-223414" primary control-plane node in "download-only-223414" cluster
	I0327 23:56:04.356082 1957315 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 23:56:04.357981 1957315 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 23:56:04.358007 1957315 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 23:56:04.358170 1957315 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 23:56:04.370651 1957315 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 23:56:04.370776 1957315 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 23:56:04.370802 1957315 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 23:56:04.370807 1957315 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 23:56:04.370819 1957315 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 23:56:04.468454 1957315 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0327 23:56:04.468482 1957315 cache.go:56] Caching tarball of preloaded images
	I0327 23:56:04.468674 1957315 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 23:56:04.470957 1957315 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 23:56:04.470986 1957315 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 ...
	I0327 23:56:04.626190 1957315 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:663a9a795decbfebeb48b89f3f24d179 -> /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-223414 host does not exist
	  To start a cluster, run: "minikube start -p download-only-223414"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.09s)
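The preload step logged above appends an md5 checksum to the download URL and verifies it before the tarball is cached. Below is a minimal Go sketch of that download-then-verify pattern; it is not minikube's actual downloader, and the URL, destination path, and checksum are placeholders lifted from the log.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest while hashing the bytes,
// then compares the digest against wantMD5 (hex-encoded).
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := md5.New()
	// Tee the body so the file and the hash see the same bytes.
	if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Hypothetical invocation; the real preload URL and md5 appear in the log above.
	if err := downloadWithMD5("https://example.com/preload.tar.lz4", "/tmp/preload.tar.lz4", "663a9a795decbfebeb48b89f3f24d179"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}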

TestDownloadOnly/v1.29.3/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.19s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-223414
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-beta.0/json-events (9.66s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-984922 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-984922 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.658131538s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (9.66s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-984922
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-984922: exit status 85 (77.867261ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-136920 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:55 UTC |                     |
	|         | -p download-only-136920             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-136920             | download-only-136920 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| start   | -o=json --download-only             | download-only-223414 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | -p download-only-223414             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete  | -p download-only-223414             | download-only-223414 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| start   | -o=json --download-only             | download-only-984922 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC |                     |
	|         | -p download-only-984922             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:56:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:56:13.364385 1957481 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:56:13.364567 1957481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:56:13.364577 1957481 out.go:304] Setting ErrFile to fd 2...
	I0327 23:56:13.364582 1957481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:56:13.364836 1957481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0327 23:56:13.365220 1957481 out.go:298] Setting JSON to true
	I0327 23:56:13.366078 1957481 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27511,"bootTime":1711556262,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 23:56:13.366148 1957481 start.go:139] virtualization:  
	I0327 23:56:13.368725 1957481 out.go:97] [download-only-984922] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 23:56:13.370476 1957481 out.go:169] MINIKUBE_LOCATION=18158
	I0327 23:56:13.369049 1957481 notify.go:220] Checking for updates...
	I0327 23:56:13.374093 1957481 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:56:13.376154 1957481 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0327 23:56:13.377861 1957481 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0327 23:56:13.379628 1957481 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 23:56:13.383071 1957481 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:56:13.383379 1957481 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:56:13.402498 1957481 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 23:56:13.402599 1957481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:56:13.452532 1957481 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 23:56:13.443629962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:56:13.452636 1957481 docker.go:295] overlay module found
	I0327 23:56:13.454356 1957481 out.go:97] Using the docker driver based on user configuration
	I0327 23:56:13.454389 1957481 start.go:297] selected driver: docker
	I0327 23:56:13.454395 1957481 start.go:901] validating driver "docker" against <nil>
	I0327 23:56:13.454498 1957481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 23:56:13.513354 1957481 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 23:56:13.504423113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 23:56:13.513526 1957481 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:56:13.513789 1957481 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 23:56:13.514007 1957481 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:56:13.515914 1957481 out.go:169] Using Docker driver with root privileges
	I0327 23:56:13.517630 1957481 cni.go:84] Creating CNI manager for ""
	I0327 23:56:13.517650 1957481 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0327 23:56:13.517666 1957481 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 23:56:13.517771 1957481 start.go:340] cluster config:
	{Name:download-only-984922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-984922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:56:13.519930 1957481 out.go:97] Starting "download-only-984922" primary control-plane node in "download-only-984922" cluster
	I0327 23:56:13.519961 1957481 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0327 23:56:13.521961 1957481 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 23:56:13.521993 1957481 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 23:56:13.522142 1957481 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 23:56:13.535216 1957481 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 23:56:13.535347 1957481 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 23:56:13.535370 1957481 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 23:56:13.535376 1957481 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 23:56:13.535386 1957481 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 23:56:13.625475 1957481 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I0327 23:56:13.625507 1957481 cache.go:56] Caching tarball of preloaded images
	I0327 23:56:13.625676 1957481 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 23:56:13.627672 1957481 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 23:56:13.627695 1957481 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	I0327 23:56:13.744586 1957481 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:f676343275e1172ac594af64d6d0592a -> /home/jenkins/minikube-integration/18158-1951721/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-984922 host does not exist
	  To start a cluster, run: "minikube start -p download-only-984922"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)
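Both download-only runs log the same CNI decision: the docker driver combined with the containerd runtime leads minikube to recommend kindnet and set NetworkPlugin=cni. The following is a toy reduction of that driver/runtime dispatch, purely for illustration; minikube's real selection logic lives in its cni package and covers many more combinations.

package main

import "fmt"

// chooseCNI is a simplified sketch of the decision logged above: a
// non-docker runtime needs a CNI, and kindnet is the suggestion for
// the docker driver. This is not minikube's actual rule set.
func chooseCNI(driver, runtime string) string {
	if runtime == "docker" {
		return "" // built-in docker networking, no CNI required
	}
	if driver == "docker" {
		return "kindnet"
	}
	return "bridge" // assumed fallback for other drivers
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet, as in the log
}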

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-984922
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-677516 --alsologtostderr --binary-mirror http://127.0.0.1:43099 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-677516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-677516
--- PASS: TestBinaryMirror (0.57s)
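TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:43099) so that kubectl/kubelet/kubeadm binaries are fetched locally instead of from the public release bucket. A bare-bones stand-in for such a mirror is just a static file server; the directory below is an assumption, and a real mirror must reproduce the upstream path layout.

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory of pre-fetched Kubernetes binaries. A real
	// mirror must mirror the upstream layout, e.g.
	// /release/v1.29.3/bin/linux/arm64/kubectl.
	fs := http.FileServer(http.Dir("/var/cache/k8s-binaries")) // assumed directory
	log.Fatal(http.ListenAndServe("127.0.0.1:43099", fs))
}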

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.14s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-482679
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-482679: exit status 85 (136.893763ms)

-- stdout --
	* Profile "addons-482679" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-482679"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.14s)
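Exit status 85 is the expected outcome here: the profile does not exist yet, so the addon command fails in a well-defined way and the test passes on that code. Reading a specific exit code from a subprocess in Go looks roughly like this; the command line is the one from the log, and the surrounding program is a sketch, not the test helper itself.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-482679")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 85 is what the test expects for a non-existent profile.
		fmt.Println("exit code:", ee.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run:", err) // e.g. binary not found
	}
}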

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.12s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-482679
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-482679: exit status 85 (119.716942ms)

-- stdout --
	* Profile "addons-482679" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-482679"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.12s)

TestAddons/Setup (118.4s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-482679 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-482679 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (1m58.394835108s)
--- PASS: TestAddons/Setup (118.40s)

TestAddons/parallel/Registry (15.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 42.073221ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-c6js5" [f3baf2dc-389c-478c-8148-510e917e380b] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005484705s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mgwql" [5d95c02e-cd9e-4a2f-8b0b-0e6e7f131536] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004555039s
addons_test.go:340: (dbg) Run:  kubectl --context addons-482679 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-482679 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-482679 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.456483804s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 ip
2024/03/27 23:58:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.55s)
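The registry addon is probed from two directions: a throwaway busybox pod runs wget --spider against the in-cluster service name, and the harness issues a plain GET to the node IP on port 5000 (the [DEBUG] line above). The node-IP probe, reduced to a Go sketch; whether / or /v2/ is the right path depends on the registry, so treat this as illustrative.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Mirrors the "[DEBUG] GET http://192.168.49.2:5000" line above.
	resp, err := client.Get("http://192.168.49.2:5000")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}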

TestAddons/parallel/InspektorGadget (10.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fz7wb" [a591da3a-d3c1-4ed5-af28-41c174f4b373] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008616614s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-482679
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-482679: (5.759147195s)
--- PASS: TestAddons/parallel/InspektorGadget (10.77s)

TestAddons/parallel/MetricsServer (7.02s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.356133ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-txgn5" [e0e41c2b-b28d-474c-81f6-204fae8b58f6] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00864265s
addons_test.go:415: (dbg) Run:  kubectl --context addons-482679 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.02s)

TestAddons/parallel/CSI (68.05s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 43.222333ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-482679 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-482679 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fa8eb217-8a54-473b-9a06-531bdd9ca99b] Pending
helpers_test.go:344: "task-pv-pod" [fa8eb217-8a54-473b-9a06-531bdd9ca99b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fa8eb217-8a54-473b-9a06-531bdd9ca99b] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003749892s
addons_test.go:584: (dbg) Run:  kubectl --context addons-482679 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-482679 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-482679 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-482679 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-482679 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-482679 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-482679 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b03bbd6b-82b1-4480-9a47-5439e50dfaff] Pending
helpers_test.go:344: "task-pv-pod-restore" [b03bbd6b-82b1-4480-9a47-5439e50dfaff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b03bbd6b-82b1-4480-9a47-5439e50dfaff] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.007196534s
addons_test.go:626: (dbg) Run:  kubectl --context addons-482679 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-482679 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-482679 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-482679 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.830728132s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.05s)
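The long run of helpers_test.go:394 lines above is a poll loop: the helper repeatedly reads .status.phase of the PVC via kubectl's jsonpath output until it reports Bound or the timeout expires. The same loop, reduced to plain exec calls; the context, PVC name, and 6m0s budget come from the log, while the 2s interval is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase shells out the same way the helper's log lines show.
func pvcPhase(ctx, name, ns string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}", "-n", ns).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("addons-482679", "hpvc", "default")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(2 * time.Second) // assumed interval; the helper's is not shown
	}
	fmt.Println("timed out waiting for pvc")
}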

TestAddons/parallel/CloudSpanner (5.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-qpg92" [90db4a3b-381a-4a28-b557-2e8254c3395a] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012381269s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-482679
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

TestAddons/parallel/LocalPath (51.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-482679 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-482679 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-482679 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f2863a11-4b44-44a8-b262-9feb8839b92d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f2863a11-4b44-44a8-b262-9feb8839b92d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f2863a11-4b44-44a8-b262-9feb8839b92d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003785585s
addons_test.go:891: (dbg) Run:  kubectl --context addons-482679 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 ssh "cat /opt/local-path-provisioner/pvc-4cf0df16-62cf-4223-8c0a-bca573b6a479_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-482679 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-482679 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-482679 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-482679 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.335354152s)
--- PASS: TestAddons/parallel/LocalPath (51.60s)
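The LocalPath check derives the host path of the provisioned volume from the PVC itself: it reads the claim as JSON (kubectl get pvc test-pvc -o=json) and then cats /opt/local-path-provisioner/<volumeName>_<namespace>_<claimName>/file1 over minikube ssh. Decoding just the fields needed to build that path could look like the following sketch; the path template is local-path-provisioner's default and is configurable.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// pvc holds only the fields this check needs from the full object.
type pvc struct {
	Metadata struct {
		Name      string `json:"name"`
		Namespace string `json:"namespace"`
	} `json:"metadata"`
	Spec struct {
		VolumeName string `json:"volumeName"` // e.g. pvc-4cf0df16-...
	} `json:"spec"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-482679",
		"get", "pvc", "test-pvc", "-o=json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var p pvc
	if err := json.Unmarshal(out, &p); err != nil {
		fmt.Println(err)
		return
	}
	// Default local-path layout: <root>/<volumeName>_<namespace>_<claimName>
	fmt.Printf("/opt/local-path-provisioner/%s_%s_%s/file1\n",
		p.Spec.VolumeName, p.Metadata.Namespace, p.Metadata.Name)
}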

TestAddons/parallel/NvidiaDevicePlugin (5.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mrhg6" [aa83998b-3a9a-4746-abdb-f97f818000d6] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004727167s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-482679
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-8j2mj" [c6a3ef2b-3dee-4b95-ae41-900623264203] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004148787s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-482679 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-482679 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-482679
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-482679: (11.914480852s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-482679
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-482679
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-482679
--- PASS: TestAddons/StoppedEnableDisable (12.19s)

TestCertOptions (38.92s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-240788 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-240788 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.259683757s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-240788 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-240788 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-240788 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-240788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-240788
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-240788: (2.009623335s)
--- PASS: TestCertOptions (38.92s)
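
Note: the certificate checks above can be reproduced by hand. A minimal sketch, assuming minikube is on PATH and using a hypothetical profile name (cert-demo); the flags mirror the test invocation:

    # start a cluster with extra apiserver SAN entries and a non-default port
    minikube start -p cert-demo --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # dump the apiserver certificate and inspect its Subject Alternative Names
    minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    minikube delete -p cert-demo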

TestCertExpiration (228.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-658183 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-658183 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.911990064s)
E0328 00:39:58.427327 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-658183 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-658183 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.758886216s)
helpers_test.go:175: Cleaning up "cert-expiration-658183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-658183
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-658183: (2.278818841s)
--- PASS: TestCertExpiration (228.95s)
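
Note: the timing above (about 38s for the first start, a roughly three-minute wait, then an 8s restart) reflects the certificate-expiry round trip. A minimal sketch, assuming a hypothetical profile name (cert-exp-demo):

    # issue certificates that expire after 3 minutes
    minikube start -p cert-exp-demo --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    sleep 180   # let the short-lived certificates expire
    # restarting with a one-year expiry regenerates the expired certificates
    minikube start -p cert-exp-demo --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd
    minikube delete -p cert-exp-demo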

TestForceSystemdFlag (39.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-608052 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-608052 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.917477652s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-608052 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-608052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-608052
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-608052: (2.312128844s)
--- PASS: TestForceSystemdFlag (39.64s)
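
Note: the test's assertion amounts to checking that --force-systemd switched containerd to the systemd cgroup driver. A minimal sketch with a hypothetical profile name (systemd-demo); grepping for the containerd SystemdCgroup key is an assumption about the exact check performed:

    minikube start -p systemd-demo --memory=2048 --force-systemd \
      --driver=docker --container-runtime=containerd
    # with --force-systemd, the runc options should show SystemdCgroup = true
    minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    minikube delete -p systemd-demo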

TestForceSystemdEnv (40.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-772681 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-772681 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.707782071s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-772681 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-772681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-772681
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-772681: (2.203623118s)
--- PASS: TestForceSystemdEnv (40.19s)

TestDockerEnvContainerd (44.87s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-230101 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-230101 --driver=docker  --container-runtime=containerd: (28.345970827s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-230101"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-230101": (1.237348036s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PAsxnFpiXrSx/agent.1974287" SSH_AGENT_PID="1974288" DOCKER_HOST=ssh://docker@127.0.0.1:35044 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PAsxnFpiXrSx/agent.1974287" SSH_AGENT_PID="1974288" DOCKER_HOST=ssh://docker@127.0.0.1:35044 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PAsxnFpiXrSx/agent.1974287" SSH_AGENT_PID="1974288" DOCKER_HOST=ssh://docker@127.0.0.1:35044 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.355761908s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PAsxnFpiXrSx/agent.1974287" SSH_AGENT_PID="1974288" DOCKER_HOST=ssh://docker@127.0.0.1:35044 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PAsxnFpiXrSx/agent.1974287" SSH_AGENT_PID="1974288" DOCKER_HOST=ssh://docker@127.0.0.1:35044 docker image ls": (1.084951547s)
helpers_test.go:175: Cleaning up "dockerenv-230101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-230101
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-230101: (2.214950985s)
--- PASS: TestDockerEnvContainerd (44.87s)
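
Note: the SSH_AUTH_SOCK/DOCKER_HOST environment seen above is what docker-env emits. A minimal sketch, assuming a hypothetical profile name (dockerenv-demo) and a Dockerfile in the current directory; eval-ing the output points the host docker CLI at the daemon inside the node over SSH:

    minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
    eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-demo)"
    docker version
    # classic builder (BuildKit disabled), as in the run above
    DOCKER_BUILDKIT=0 docker build -t local/demo:latest .
    docker image ls
    minikube delete -p dockerenv-demo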

TestErrorSpam/setup (32.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-148633 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-148633 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-148633 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-148633 --driver=docker  --container-runtime=containerd: (32.070183037s)
--- PASS: TestErrorSpam/setup (32.07s)

TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

TestErrorSpam/status (1.02s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 status
--- PASS: TestErrorSpam/status (1.02s)

TestErrorSpam/pause (1.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 pause
--- PASS: TestErrorSpam/pause (1.62s)

TestErrorSpam/unpause (1.77s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 stop: (1.228837275s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-148633 --log_dir /tmp/nospam-148633 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18158-1951721/.minikube/files/etc/test/nested/copy/1957141/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.48s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-197628 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0328 00:03:23.483346 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:23.489224 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:23.499538 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:23.519882 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:23.560193 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:23.640458 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:23.800902 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:24.121366 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:24.762210 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:26.042477 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:28.602737 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:33.723518 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:03:43.964617 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-197628 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m19.475502939s)
--- PASS: TestFunctional/serial/StartWithProxy (79.48s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.56s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-197628 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-197628 --alsologtostderr -v=8: (5.56197859s)
functional_test.go:659: soft start took 5.563405239s for "functional-197628" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.56s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-197628 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 cache add registry.k8s.io/pause:3.1: (1.391205479s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 cache add registry.k8s.io/pause:3.3: (1.343198016s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 cache add registry.k8s.io/pause:latest: (1.540141474s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.27s)

TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-197628 /tmp/TestFunctionalserialCacheCmdcacheadd_local2104679891/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cache add minikube-local-cache-test:functional-197628
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cache delete minikube-local-cache-test:functional-197628
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-197628
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.46s)
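
Note: the add_local flow can be exercised manually. A minimal sketch with hypothetical image and profile names; the image is built on the host, copied into the cluster's cache, then removed:

    docker build -t local-cache-demo:v1 .
    # copy the image from the host docker daemon into the cluster
    minikube -p demo cache add local-cache-demo:v1
    minikube -p demo cache delete local-cache-demo:v1
    docker rmi local-cache-demo:v1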

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.838571ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cache reload
E0328 00:04:04.445608 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 cache reload: (1.169309005s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)
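
Note: the non-zero inspecti exit above is the expected intermediate state. A minimal sketch with a hypothetical profile name; inspecti fails while the image is absent from the node and succeeds again once cache reload re-pushes cached images:

    minikube -p demo cache add registry.k8s.io/pause:latest
    minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent, as expected"
    minikube -p demo cache reload
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest && echo "image restored"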

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 kubectl -- --context functional-197628 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-197628 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (43.54s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-197628 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0328 00:04:45.406439 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-197628 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.537128352s)
functional_test.go:757: restart took 43.537236019s for "functional-197628" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.54s)
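
Note: --extra-config takes component.key=value pairs and is applied on restart. A minimal sketch with a hypothetical profile name; the verification command is an assumption, relying on the component=kube-apiserver label carried by the static apiserver pod:

    minikube start -p demo \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all
    # confirm the flag reached the apiserver command line (assumed check)
    kubectl --context demo -n kube-system get pod -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | grep -o 'enable-admission-plugins=[^"]*'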

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-197628 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 logs: (1.673412878s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

TestFunctional/serial/LogsFileCmd (1.69s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 logs --file /tmp/TestFunctionalserialLogsFileCmd1174571932/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 logs --file /tmp/TestFunctionalserialLogsFileCmd1174571932/001/logs.txt: (1.688061816s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

TestFunctional/serial/InvalidService (4.07s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-197628 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-197628
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-197628: exit status 115 (376.958299ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30139 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-197628 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)
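
Note: exit status 115 is minikube's SVC_UNREACHABLE code, so the failure above is deterministic. A minimal sketch, assuming a manifest (invalidsvc.yaml) whose service selector matches no running pod:

    kubectl --context demo apply -f invalidsvc.yaml
    minikube -p demo service invalid-svc
    echo "exit status: $?"   # expected: 115 (SVC_UNREACHABLE)
    kubectl --context demo delete -f invalidsvc.yaml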

TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 config get cpus: exit status 14 (99.747102ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 config get cpus: exit status 14 (83.536295ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
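
Note: the exit-14 results above follow the config subcommand's contract: get on an unset key fails, while set/unset round-trip cleanly. A minimal sketch with a hypothetical profile name:

    minikube -p demo config get cpus || echo "unset key -> exit $?"   # 14
    minikube -p demo config set cpus 2
    minikube -p demo config get cpus                                  # prints 2
    minikube -p demo config unset cpus
    minikube -p demo config get cpus || echo "unset key -> exit $?"   # 14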

TestFunctional/parallel/DashboardCmd (8.76s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-197628 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-197628 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1988769: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.76s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-197628 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-197628 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (228.720293ms)

-- stdout --
	* [functional-197628] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0328 00:05:30.487262 1988488 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:05:30.487449 1988488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:05:30.487469 1988488 out.go:304] Setting ErrFile to fd 2...
	I0328 00:05:30.487487 1988488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:05:30.487742 1988488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:05:30.488120 1988488 out.go:298] Setting JSON to false
	I0328 00:05:30.489110 1988488 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28068,"bootTime":1711556262,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 00:05:30.489207 1988488 start.go:139] virtualization:  
	I0328 00:05:30.492729 1988488 out.go:177] * [functional-197628] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 00:05:30.494869 1988488 out.go:177]   - MINIKUBE_LOCATION=18158
	I0328 00:05:30.494926 1988488 notify.go:220] Checking for updates...
	I0328 00:05:30.499347 1988488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:05:30.501513 1988488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0328 00:05:30.503639 1988488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0328 00:05:30.505602 1988488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 00:05:30.508105 1988488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:05:30.510547 1988488 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:05:30.511095 1988488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:05:30.531372 1988488 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 00:05:30.531514 1988488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:05:30.626168 1988488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-28 00:05:30.614535159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:05:30.626285 1988488 docker.go:295] overlay module found
	I0328 00:05:30.628661 1988488 out.go:177] * Using the docker driver based on existing profile
	I0328 00:05:30.630332 1988488 start.go:297] selected driver: docker
	I0328 00:05:30.630363 1988488 start.go:901] validating driver "docker" against &{Name:functional-197628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-197628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:05:30.630461 1988488 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:05:30.632704 1988488 out.go:177] 
	W0328 00:05:30.634623 1988488 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0328 00:05:30.636173 1988488 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-197628 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
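
Note: exit status 23 is minikube's RSRC_INSUFFICIENT_REQ_MEMORY code; --dry-run validates without mutating the existing profile. A minimal sketch against a hypothetical existing profile:

    # 250MB is below the 1800MB usable minimum, so validation fails fast
    minikube start -p demo --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd
    echo "exit status: $?"   # expected: 23
    # the same dry run with defaults validates successfully
    minikube start -p demo --dry-run --driver=docker --container-runtime=containerd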

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-197628 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-197628 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (242.065198ms)

-- stdout --
	* [functional-197628] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0328 00:05:30.239836 1988370 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:05:30.240019 1988370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:05:30.240032 1988370 out.go:304] Setting ErrFile to fd 2...
	I0328 00:05:30.240038 1988370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:05:30.241081 1988370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:05:30.241592 1988370 out.go:298] Setting JSON to false
	I0328 00:05:30.242694 1988370 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28068,"bootTime":1711556262,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 00:05:30.242770 1988370 start.go:139] virtualization:  
	I0328 00:05:30.247631 1988370 out.go:177] * [functional-197628] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0328 00:05:30.250146 1988370 out.go:177]   - MINIKUBE_LOCATION=18158
	I0328 00:05:30.252233 1988370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:05:30.250245 1988370 notify.go:220] Checking for updates...
	I0328 00:05:30.257088 1988370 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0328 00:05:30.259254 1988370 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0328 00:05:30.261407 1988370 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 00:05:30.264061 1988370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:05:30.266619 1988370 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:05:30.267252 1988370 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:05:30.293304 1988370 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 00:05:30.293487 1988370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:05:30.393543 1988370 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-28 00:05:30.383320475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:05:30.393656 1988370 docker.go:295] overlay module found
	I0328 00:05:30.396723 1988370 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0328 00:05:30.398446 1988370 start.go:297] selected driver: docker
	I0328 00:05:30.398464 1988370 start.go:901] validating driver "docker" against &{Name:functional-197628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-197628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:05:30.398584 1988370 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:05:30.401076 1988370 out.go:177] 
	W0328 00:05:30.403154 1988370 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0328 00:05:30.407019 1988370 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)

TestFunctional/parallel/ServiceCmdConnect (9.79s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-197628 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-197628 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-9rnv9" [136028cf-e6ce-4ccc-8e18-f164c89fef69] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-9rnv9" [136028cf-e6ce-4ccc-8e18-f164c89fef69] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004531902s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32602
functional_test.go:1671: http://192.168.49.2:32602: success! body:

Hostname: hello-node-connect-7799dfb7c6-9rnv9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32602
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.79s)
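
Note: end to end, the connectivity check above reduces to a handful of commands. A minimal sketch with hypothetical deployment and profile names; the echoserver image is the one used by the test:

    kubectl --context demo create deployment hello-demo --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context demo expose deployment hello-demo --type=NodePort --port=8080
    kubectl --context demo wait --for=condition=available deployment/hello-demo --timeout=120s
    # resolve the NodePort URL and hit it; echoserver reports the request back
    url=$(minikube -p demo service hello-demo --url)
    curl -s "$url"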

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (27.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f1dc0991-4f05-4eb7-aa2c-3929768f4124] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004592657s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-197628 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-197628 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-197628 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-197628 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-197628 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [464235b6-8e65-41d3-8342-e7026545f1db] Pending
helpers_test.go:344: "sp-pod" [464235b6-8e65-41d3-8342-e7026545f1db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [464235b6-8e65-41d3-8342-e7026545f1db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003658809s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-197628 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-197628 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-197628 delete -f testdata/storage-provisioner/pod.yaml: (1.913192755s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-197628 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [34bd9704-642b-4d8a-a100-292ad21a2bda] Pending
helpers_test.go:344: "sp-pod" [34bd9704-642b-4d8a-a100-292ad21a2bda] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004494425s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-197628 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.06s)
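The sequence above is the substance of the test: data written to the claim must survive pod deletion. A compact sketch of the same round-trip, assuming the kubectl context and manifests shown in the log (the run helper is ours, not minikube's):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	ctx := "--context=functional-197628"
	run(ctx, "apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait for sp-pod to be Running, then write through the claim:
	run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait again; the file written by the first pod should still exist:
	fmt.Print(run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"))
}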

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh -n functional-197628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cp functional-197628:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1177218438/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh -n functional-197628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh -n functional-197628 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.33s)
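A sketch of the round-trip check above, assuming the paths from the log: copy a file into the node with "minikube cp", read it back over "minikube ssh", and compare bytes.

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	// Copy the fixture into the node.
	if err := exec.Command(mk, "-p", "functional-197628", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back over ssh and compare with the local original.
	got, err := exec.Command(mk, "-p", "functional-197628", "ssh", "-n",
		"functional-197628", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, _ := os.ReadFile("testdata/cp-test.txt")
	if !bytes.Equal(got, want) {
		panic("cp-test.txt round-trip mismatch")
	}
}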

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1957141/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo cat /etc/test/nested/copy/1957141/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (2.02s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1957141.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo cat /etc/ssl/certs/1957141.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1957141.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo cat /usr/share/ca-certificates/1957141.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/19571412.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo cat /etc/ssl/certs/19571412.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/19571412.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo cat /usr/share/ca-certificates/19571412.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.02s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-197628 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 ssh "sudo systemctl is-active docker": exit status 1 (298.950822ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 ssh "sudo systemctl is-active crio": exit status 1 (331.082425ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
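The two non-zero exits above are the expected outcome: on a containerd cluster, "systemctl is-active" reports docker and crio as inactive and exits non-zero (status 3 in this run), which ssh propagates. A sketch of that interpretation (not the test's code; it reuses the commands from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		// Output still returns the captured stdout when the command exits non-zero.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-197628",
			"ssh", fmt.Sprintf("sudo systemctl is-active %s", unit)).Output()
		state := strings.TrimSpace(string(out))
		if err != nil && state == "inactive" {
			// Non-zero exit plus "inactive" is the success case here.
			fmt.Printf("%s: inactive, as expected\n", unit)
			continue
		}
		fmt.Printf("%s: unexpected state %q (err=%v)\n", unit, state, err)
	}
}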

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-197628 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-197628 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-197628 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-197628 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1986262: os: process already finished
helpers_test.go:508: unable to kill pid 1986111: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-197628 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-197628 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9a16511f-0943-4e0d-9330-fbfd37a87229] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9a16511f-0943-4e0d-9330-fbfd37a87229] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003724625s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-197628 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
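While "minikube tunnel" runs, the LoadBalancer service gets the ingress IP read by the jsonpath query above. A sketch that polls for it (the retry loop is our addition, not the test's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-197628",
			"get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("tunnel ingress IP:", ip) // 10.109.228.158 in this run
			return
		}
		time.Sleep(2 * time.Second) // tunnel may not have programmed the IP yet
	}
	panic("no loadBalancer ingress IP was assigned")
}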

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.228.158 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-197628 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-197628 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-197628 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-t8dpd" [89e044c5-a36b-4446-9528-901926c0f35c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-t8dpd" [89e044c5-a36b-4446-9528-901926c0f35c] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004091208s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.29s)
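The deployment-plus-NodePort setup above boils down to two kubectl calls; a minimal sketch, assuming the same context and image:

package main

import "os/exec"

func main() {
	run := func(args ...string) {
		if err := exec.Command("kubectl", args...).Run(); err != nil {
			panic(err)
		}
	}
	// Create the echoserver deployment, then expose it on a NodePort.
	run("--context", "functional-197628", "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver-arm:1.8")
	run("--context", "functional-197628", "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
}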

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "394.098527ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "78.765488ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "414.506345ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "89.878646ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 service list -o json
functional_test.go:1490: Took "617.235529ms" to run "out/minikube-linux-arm64 -p functional-197628 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

TestFunctional/parallel/MountCmd/any-port (7.4s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdany-port282065787/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711584327261237117" to /tmp/TestFunctionalparallelMountCmdany-port282065787/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711584327261237117" to /tmp/TestFunctionalparallelMountCmdany-port282065787/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711584327261237117" to /tmp/TestFunctionalparallelMountCmdany-port282065787/001/test-1711584327261237117
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (416.086355ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 28 00:05 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 28 00:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 28 00:05 test-1711584327261237117
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh cat /mount-9p/test-1711584327261237117
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-197628 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9ea330f1-8789-4d22-8a3d-00e09f32ef17] Pending
helpers_test.go:344: "busybox-mount" [9ea330f1-8789-4d22-8a3d-00e09f32ef17] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9ea330f1-8789-4d22-8a3d-00e09f32ef17] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9ea330f1-8789-4d22-8a3d-00e09f32ef17] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004150156s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-197628 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdany-port282065787/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.40s)
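The first findmnt probe above fails because it can race the 9p mount becoming visible inside the node; the test simply retries. A sketch of that poll loop (the retry count and interval are our choices):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		// Same probe the test runs: is /mount-9p backed by a 9p filesystem yet?
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-197628",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		time.Sleep(time.Second) // not mounted yet; retry
	}
	panic("9p mount never appeared")
}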

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31678
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31678
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdspecific-port4189469919/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (375.637226ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdspecific-port4189469919/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 ssh "sudo umount -f /mount-9p": exit status 1 (353.190276ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-197628 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdspecific-port4189469919/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4190714737/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4190714737/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4190714737/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T" /mount1: exit status 1 (1.115189695s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh "findmnt -T" /mount3
2024/03/28 00:05:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-197628 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4190714737/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4190714737/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-197628 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4190714737/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.76s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 version -o=json --components: (1.314904986s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-197628 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-197628
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-197628 image ls --format short --alsologtostderr:
I0328 00:05:56.736613 1990973 out.go:291] Setting OutFile to fd 1 ...
I0328 00:05:56.736832 1990973 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:56.736844 1990973 out.go:304] Setting ErrFile to fd 2...
I0328 00:05:56.736851 1990973 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:56.737128 1990973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
I0328 00:05:56.737847 1990973 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:56.738011 1990973 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:56.738537 1990973 cli_runner.go:164] Run: docker container inspect functional-197628 --format={{.State.Status}}
I0328 00:05:56.753838 1990973 ssh_runner.go:195] Run: systemctl --version
I0328 00:05:56.753898 1990973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-197628
I0328 00:05:56.767404 1990973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35054 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/functional-197628/id_rsa Username:docker}
I0328 00:05:56.858242 1990973 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-197628 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-197628  | sha256:6ec729 | 990B   |
| docker.io/library/nginx                     | alpine             | sha256:b8c826 | 17.6MB |
| docker.io/library/nginx                     | latest             | sha256:070027 | 67.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:121d70 | 30.6MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:0e9b4a | 25MB   |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:4b51f9 | 16.9MB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:258111 | 32.1MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-197628 image ls --format table --alsologtostderr:
I0328 00:05:57.409039 1991114 out.go:291] Setting OutFile to fd 1 ...
I0328 00:05:57.409247 1991114 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:57.409274 1991114 out.go:304] Setting ErrFile to fd 2...
I0328 00:05:57.409294 1991114 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:57.409746 1991114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
I0328 00:05:57.410506 1991114 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:57.410690 1991114 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:57.411281 1991114 cli_runner.go:164] Run: docker container inspect functional-197628 --format={{.State.Status}}
I0328 00:05:57.432188 1991114 ssh_runner.go:195] Run: systemctl --version
I0328 00:05:57.432245 1991114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-197628
I0328 00:05:57.454896 1991114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35054 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/functional-197628/id_rsa Username:docker}
I0328 00:05:57.545149 1991114 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-197628 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"30578527"},{"id":"sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"16931371"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216851"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:6ec7290a1efbae78bc670dd78f3847e1a0c01fc174cd39ea37a4bd183c565611","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-197628"],"size":"990"},{"id":"sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601398"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"32143347"},{"id":"sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"25039677"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-197628 image ls --format json --alsologtostderr:
I0328 00:05:57.095594 1991035 out.go:291] Setting OutFile to fd 1 ...
I0328 00:05:57.095820 1991035 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:57.095848 1991035 out.go:304] Setting ErrFile to fd 2...
I0328 00:05:57.095870 1991035 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:57.096396 1991035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
I0328 00:05:57.097232 1991035 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:57.097415 1991035 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:57.098039 1991035 cli_runner.go:164] Run: docker container inspect functional-197628 --format={{.State.Status}}
I0328 00:05:57.140558 1991035 ssh_runner.go:195] Run: systemctl --version
I0328 00:05:57.140612 1991035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-197628
I0328 00:05:57.166391 1991035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35054 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/functional-197628/id_rsa Username:docker}
I0328 00:05:57.255468 1991035 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
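The stdout above is a JSON array of image records; a minimal Go decoder for the fields it shows (the struct is ours, defined from the observed output, not from minikube's API):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-197628",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}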

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-197628 image ls --format yaml --alsologtostderr:
- id: sha256:6ec7290a1efbae78bc670dd78f3847e1a0c01fc174cd39ea37a4bd183c565611
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-197628
size: "990"
- id: sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "67216851"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "30578527"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "16931371"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
repoTags:
- docker.io/library/nginx:alpine
size: "17601398"
- id: sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "25039677"
- id: sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "32143347"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-197628 image ls --format yaml --alsologtostderr:
I0328 00:05:56.747254 1990974 out.go:291] Setting OutFile to fd 1 ...
I0328 00:05:56.747516 1990974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:56.747548 1990974 out.go:304] Setting ErrFile to fd 2...
I0328 00:05:56.747572 1990974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:56.747838 1990974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
I0328 00:05:56.749265 1990974 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:56.749501 1990974 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:56.750033 1990974 cli_runner.go:164] Run: docker container inspect functional-197628 --format={{.State.Status}}
I0328 00:05:56.770425 1990974 ssh_runner.go:195] Run: systemctl --version
I0328 00:05:56.770481 1990974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-197628
I0328 00:05:56.791351 1990974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35054 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/functional-197628/id_rsa Username:docker}
I0328 00:05:56.883890 1990974 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-197628 ssh pgrep buildkitd: exit status 1 (333.906679ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image build -t localhost/my-image:functional-197628 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-197628 image build -t localhost/my-image:functional-197628 testdata/build --alsologtostderr: (2.210773881s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-197628 image build -t localhost/my-image:functional-197628 testdata/build --alsologtostderr:
I0328 00:05:57.355330 1991104 out.go:291] Setting OutFile to fd 1 ...
I0328 00:05:57.356564 1991104 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:57.356578 1991104 out.go:304] Setting ErrFile to fd 2...
I0328 00:05:57.356583 1991104 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 00:05:57.356867 1991104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
I0328 00:05:57.357560 1991104 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:57.358217 1991104 config.go:182] Loaded profile config "functional-197628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 00:05:57.358693 1991104 cli_runner.go:164] Run: docker container inspect functional-197628 --format={{.State.Status}}
I0328 00:05:57.387087 1991104 ssh_runner.go:195] Run: systemctl --version
I0328 00:05:57.387146 1991104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-197628
I0328 00:05:57.404330 1991104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35054 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/functional-197628/id_rsa Username:docker}
I0328 00:05:57.494822 1991104 build_images.go:161] Building image from path: /tmp/build.271676481.tar
I0328 00:05:57.494890 1991104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0328 00:05:57.504464 1991104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.271676481.tar
I0328 00:05:57.507887 1991104 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.271676481.tar: stat -c "%s %y" /var/lib/minikube/build/build.271676481.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.271676481.tar': No such file or directory
I0328 00:05:57.507918 1991104 ssh_runner.go:362] scp /tmp/build.271676481.tar --> /var/lib/minikube/build/build.271676481.tar (3072 bytes)
I0328 00:05:57.531676 1991104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.271676481
I0328 00:05:57.540555 1991104 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.271676481 -xf /var/lib/minikube/build/build.271676481.tar
I0328 00:05:57.550163 1991104 containerd.go:394] Building image: /var/lib/minikube/build/build.271676481
I0328 00:05:57.550241 1991104 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.271676481 --local dockerfile=/var/lib/minikube/build/build.271676481 --output type=image,name=localhost/my-image:functional-197628
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:11cff5c9edebe007ffa5957116770f86fc6ccf914a909a91e5c07b91afe27e2d 0.0s done
#8 exporting config sha256:7f6b118e91335d131138b1b5093e39e24b6e0e947a76dd0ea5948653328b8cde 0.0s done
#8 naming to localhost/my-image:functional-197628 done
#8 DONE 0.1s
I0328 00:05:59.437059 1991104 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.271676481 --local dockerfile=/var/lib/minikube/build/build.271676481 --output type=image,name=localhost/my-image:functional-197628: (1.886786924s)
I0328 00:05:59.437141 1991104 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.271676481
I0328 00:05:59.447992 1991104 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.271676481.tar
I0328 00:05:59.462649 1991104 build_images.go:217] Built localhost/my-image:functional-197628 from /tmp/build.271676481.tar
I0328 00:05:59.462680 1991104 build_images.go:133] succeeded building to: functional-197628
I0328 00:05:59.462695 1991104 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)
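
Note: the ssh_runner lines in the ImageBuild log above amount to a complete recipe. A hand-run sketch of the same path, assuming a shell on the minikube node (e.g. via minikube -p functional-197628 ssh) and the context tarball already copied up; paths and names are taken verbatim from the log, and the build.271676481 tarball name is per-run:

    # unpack the staged build context (minikube scp'd the tar up beforehand)
    sudo mkdir -p /var/lib/minikube/build/build.271676481
    sudo tar -C /var/lib/minikube/build/build.271676481 -xf /var/lib/minikube/build/build.271676481.tar
    # with the containerd runtime, minikube drives BuildKit directly via buildctl
    sudo buildctl build --frontend dockerfile.v0 \
      --local context=/var/lib/minikube/build/build.271676481 \
      --local dockerfile=/var/lib/minikube/build/build.271676481 \
      --output type=image,name=localhost/my-image:functional-197628
    # clean up the staging area, as the test does
    sudo rm -rf /var/lib/minikube/build/build.271676481
    sudo rm -f /var/lib/minikube/build/build.271676481.tar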

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.766881895s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-197628
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)
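
Note: Setup only prepares a fixture; it pulls a known-good image and retags it with a profile-scoped name that the ImageRemove and ImageSaveDaemon tests below act on. Equivalent by hand (image and profile names verbatim from the log):

    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 \
      gcr.io/google-containers/addon-resizer:functional-197628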

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image rm gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-197628
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-197628 image save --daemon gcr.io/google-containers/addon-resizer:functional-197628 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-197628
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
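
Note: ImageSaveDaemon verifies the cluster-to-host round trip: the tag is deleted from the host's Docker daemon, re-exported from the cluster with image save --daemon, then confirmed with docker image inspect. Replayed by hand (commands verbatim from the log):

    docker rmi gcr.io/google-containers/addon-resizer:functional-197628
    out/minikube-linux-arm64 -p functional-197628 image save --daemon \
      gcr.io/google-containers/addon-resizer:functional-197628
    docker image inspect gcr.io/google-containers/addon-resizer:functional-197628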

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-197628
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-197628
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-197628
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (125.57s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-788039 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0328 00:06:07.327137 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-788039 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m4.730484242s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (125.57s)
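
Note: the single start command above is the entire HA bring-up; the --ha flag is what requests a multi-control-plane topology (the per-node breakdown becomes visible in the status output under StopSecondaryNode below). A reproducible sketch, assuming the same docker driver and containerd runtime as this job:

    out/minikube-linux-arm64 start -p ha-788039 --wait=true --memory=2200 \
      --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr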

TestMultiControlPlane/serial/DeployApp (20.51s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- rollout status deployment/busybox
E0328 00:08:23.482046 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-788039 -- rollout status deployment/busybox: (17.341385716s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-cg4kj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-kfh58 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-lkt98 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-cg4kj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-kfh58 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-lkt98 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-cg4kj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-kfh58 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-lkt98 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (20.51s)
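
Note: DeployApp resolves three names of increasing specificity from every busybox replica. The nine exec lines above collapse to a loop like this sketch (pod names are per-run; the deployment comes from testdata/ha/ha-pod-dns-test.yaml):

    for pod in $(out/minikube-linux-arm64 kubectl -p ha-788039 -- \
        get pods -o jsonpath='{.items[*].metadata.name}'); do
      for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        out/minikube-linux-arm64 kubectl -p ha-788039 -- exec "$pod" -- nslookup "$name"
      done
    done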

TestMultiControlPlane/serial/PingHostFromPods (1.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-cg4kj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-cg4kj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-kfh58 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-kfh58 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-lkt98 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-788039 -- exec busybox-7fdf7869d9-lkt98 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.72s)
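
Note: each pod derives the host's address from the host.minikube.internal record and pings it once. The extraction pipeline, run inside any pod, is (the awk/cut offsets assume busybox nslookup's output layout, as in the log):

    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"   # resolves to 192.168.49.1 on the default docker network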

TestMultiControlPlane/serial/AddWorkerNode (23.22s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-788039 -v=7 --alsologtostderr
E0328 00:08:51.167693 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-788039 -v=7 --alsologtostderr: (22.253889576s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.22s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-788039 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

TestMultiControlPlane/serial/CopyFile (19.4s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp testdata/cp-test.txt ha-788039:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2829026039/001/cp-test_ha-788039.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039:/home/docker/cp-test.txt ha-788039-m02:/home/docker/cp-test_ha-788039_ha-788039-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 "sudo cat /home/docker/cp-test_ha-788039_ha-788039-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039:/home/docker/cp-test.txt ha-788039-m03:/home/docker/cp-test_ha-788039_ha-788039-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m03 "sudo cat /home/docker/cp-test_ha-788039_ha-788039-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039:/home/docker/cp-test.txt ha-788039-m04:/home/docker/cp-test_ha-788039_ha-788039-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m04 "sudo cat /home/docker/cp-test_ha-788039_ha-788039-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp testdata/cp-test.txt ha-788039-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2829026039/001/cp-test_ha-788039-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m02:/home/docker/cp-test.txt ha-788039:/home/docker/cp-test_ha-788039-m02_ha-788039.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test_ha-788039-m02_ha-788039.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m02:/home/docker/cp-test.txt ha-788039-m03:/home/docker/cp-test_ha-788039-m02_ha-788039-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m03 "sudo cat /home/docker/cp-test_ha-788039-m02_ha-788039-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m02:/home/docker/cp-test.txt ha-788039-m04:/home/docker/cp-test_ha-788039-m02_ha-788039-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m04 "sudo cat /home/docker/cp-test_ha-788039-m02_ha-788039-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp testdata/cp-test.txt ha-788039-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2829026039/001/cp-test_ha-788039-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m03:/home/docker/cp-test.txt ha-788039:/home/docker/cp-test_ha-788039-m03_ha-788039.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test_ha-788039-m03_ha-788039.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m03:/home/docker/cp-test.txt ha-788039-m02:/home/docker/cp-test_ha-788039-m03_ha-788039-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 "sudo cat /home/docker/cp-test_ha-788039-m03_ha-788039-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m03:/home/docker/cp-test.txt ha-788039-m04:/home/docker/cp-test_ha-788039-m03_ha-788039-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m04 "sudo cat /home/docker/cp-test_ha-788039-m03_ha-788039-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp testdata/cp-test.txt ha-788039-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2829026039/001/cp-test_ha-788039-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m04:/home/docker/cp-test.txt ha-788039:/home/docker/cp-test_ha-788039-m04_ha-788039.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test_ha-788039-m04_ha-788039.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m04:/home/docker/cp-test.txt ha-788039-m02:/home/docker/cp-test_ha-788039-m04_ha-788039-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 "sudo cat /home/docker/cp-test_ha-788039-m04_ha-788039-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 cp ha-788039-m04:/home/docker/cp-test.txt ha-788039-m03:/home/docker/cp-test_ha-788039-m04_ha-788039-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m03 "sudo cat /home/docker/cp-test_ha-788039-m04_ha-788039-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.40s)
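
Note: the long run of cp/ssh pairs above is one pattern applied to every ordered pair of nodes: copy a file in, then cat it back over ssh to prove it landed. The core round trip for a single pair (node names from this profile):

    out/minikube-linux-arm64 -p ha-788039 cp testdata/cp-test.txt ha-788039:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p ha-788039 cp ha-788039:/home/docker/cp-test.txt \
      ha-788039-m02:/home/docker/cp-test_ha-788039_ha-788039-m02.txt
    out/minikube-linux-arm64 -p ha-788039 ssh -n ha-788039-m02 \
      "sudo cat /home/docker/cp-test_ha-788039_ha-788039-m02.txt"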

TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-788039 node stop m02 -v=7 --alsologtostderr: (12.133298468s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr: exit status 7 (738.671114ms)

-- stdout --
	ha-788039
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-788039-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-788039-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-788039-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0328 00:09:26.133735 2006428 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:09:26.137042 2006428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:09:26.137101 2006428 out.go:304] Setting ErrFile to fd 2...
	I0328 00:09:26.137162 2006428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:09:26.137506 2006428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:09:26.137787 2006428 out.go:298] Setting JSON to false
	I0328 00:09:26.137856 2006428 mustload.go:65] Loading cluster: ha-788039
	I0328 00:09:26.138116 2006428 notify.go:220] Checking for updates...
	I0328 00:09:26.138650 2006428 config.go:182] Loaded profile config "ha-788039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:09:26.138709 2006428 status.go:255] checking status of ha-788039 ...
	I0328 00:09:26.139342 2006428 cli_runner.go:164] Run: docker container inspect ha-788039 --format={{.State.Status}}
	I0328 00:09:26.158613 2006428 status.go:330] ha-788039 host status = "Running" (err=<nil>)
	I0328 00:09:26.158637 2006428 host.go:66] Checking if "ha-788039" exists ...
	I0328 00:09:26.158943 2006428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-788039
	I0328 00:09:26.176288 2006428 host.go:66] Checking if "ha-788039" exists ...
	I0328 00:09:26.176608 2006428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:09:26.176675 2006428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-788039
	I0328 00:09:26.203940 2006428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35059 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/ha-788039/id_rsa Username:docker}
	I0328 00:09:26.291570 2006428 ssh_runner.go:195] Run: systemctl --version
	I0328 00:09:26.295885 2006428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:09:26.308614 2006428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:09:26.393837 2006428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-03-28 00:09:26.384019349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:09:26.394504 2006428 kubeconfig.go:125] found "ha-788039" server: "https://192.168.49.254:8443"
	I0328 00:09:26.394529 2006428 api_server.go:166] Checking apiserver status ...
	I0328 00:09:26.394572 2006428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:09:26.406102 2006428 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	I0328 00:09:26.416113 2006428 api_server.go:182] apiserver freezer: "12:freezer:/docker/c5c7dbfc8f512d1d017b3713a77a81e7d6efc157ff284a001ea45482fb590fd1/kubepods/burstable/podbd9cbe2170b953cc92270cb585916928/d401ce5cb3f6819d6bb71cd383aa54b759d2bc03c44e8c9d9a0e9e5137b57908"
	I0328 00:09:26.416196 2006428 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c5c7dbfc8f512d1d017b3713a77a81e7d6efc157ff284a001ea45482fb590fd1/kubepods/burstable/podbd9cbe2170b953cc92270cb585916928/d401ce5cb3f6819d6bb71cd383aa54b759d2bc03c44e8c9d9a0e9e5137b57908/freezer.state
	I0328 00:09:26.426427 2006428 api_server.go:204] freezer state: "THAWED"
	I0328 00:09:26.426458 2006428 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0328 00:09:26.434551 2006428 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0328 00:09:26.434587 2006428 status.go:422] ha-788039 apiserver status = Running (err=<nil>)
	I0328 00:09:26.434601 2006428 status.go:257] ha-788039 status: &{Name:ha-788039 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:09:26.434620 2006428 status.go:255] checking status of ha-788039-m02 ...
	I0328 00:09:26.434935 2006428 cli_runner.go:164] Run: docker container inspect ha-788039-m02 --format={{.State.Status}}
	I0328 00:09:26.451650 2006428 status.go:330] ha-788039-m02 host status = "Stopped" (err=<nil>)
	I0328 00:09:26.451676 2006428 status.go:343] host is not running, skipping remaining checks
	I0328 00:09:26.451683 2006428 status.go:257] ha-788039-m02 status: &{Name:ha-788039-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:09:26.451704 2006428 status.go:255] checking status of ha-788039-m03 ...
	I0328 00:09:26.452117 2006428 cli_runner.go:164] Run: docker container inspect ha-788039-m03 --format={{.State.Status}}
	I0328 00:09:26.470185 2006428 status.go:330] ha-788039-m03 host status = "Running" (err=<nil>)
	I0328 00:09:26.470211 2006428 host.go:66] Checking if "ha-788039-m03" exists ...
	I0328 00:09:26.470616 2006428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-788039-m03
	I0328 00:09:26.485533 2006428 host.go:66] Checking if "ha-788039-m03" exists ...
	I0328 00:09:26.485875 2006428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:09:26.485991 2006428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-788039-m03
	I0328 00:09:26.504024 2006428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35069 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/ha-788039-m03/id_rsa Username:docker}
	I0328 00:09:26.598079 2006428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:09:26.611091 2006428 kubeconfig.go:125] found "ha-788039" server: "https://192.168.49.254:8443"
	I0328 00:09:26.611119 2006428 api_server.go:166] Checking apiserver status ...
	I0328 00:09:26.611196 2006428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:09:26.626417 2006428 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1336/cgroup
	I0328 00:09:26.637170 2006428 api_server.go:182] apiserver freezer: "12:freezer:/docker/99e82b80ba87e69ae1ff478184a6285f3901a333e5cf7b9699b9d1cb801ff190/kubepods/burstable/pod441844fc0fd25b99ac7e75d84d7c28b2/7232d9bdafc10b9ff9352891cfe57084290267043929f388df8784a4cab72229"
	I0328 00:09:26.637242 2006428 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/99e82b80ba87e69ae1ff478184a6285f3901a333e5cf7b9699b9d1cb801ff190/kubepods/burstable/pod441844fc0fd25b99ac7e75d84d7c28b2/7232d9bdafc10b9ff9352891cfe57084290267043929f388df8784a4cab72229/freezer.state
	I0328 00:09:26.646046 2006428 api_server.go:204] freezer state: "THAWED"
	I0328 00:09:26.646074 2006428 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0328 00:09:26.655532 2006428 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0328 00:09:26.655566 2006428 status.go:422] ha-788039-m03 apiserver status = Running (err=<nil>)
	I0328 00:09:26.655577 2006428 status.go:257] ha-788039-m03 status: &{Name:ha-788039-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:09:26.655593 2006428 status.go:255] checking status of ha-788039-m04 ...
	I0328 00:09:26.655912 2006428 cli_runner.go:164] Run: docker container inspect ha-788039-m04 --format={{.State.Status}}
	I0328 00:09:26.671465 2006428 status.go:330] ha-788039-m04 host status = "Running" (err=<nil>)
	I0328 00:09:26.671487 2006428 host.go:66] Checking if "ha-788039-m04" exists ...
	I0328 00:09:26.671821 2006428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-788039-m04
	I0328 00:09:26.694267 2006428 host.go:66] Checking if "ha-788039-m04" exists ...
	I0328 00:09:26.694557 2006428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:09:26.694639 2006428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-788039-m04
	I0328 00:09:26.709683 2006428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35074 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/ha-788039-m04/id_rsa Username:docker}
	I0328 00:09:26.795015 2006428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:09:26.806661 2006428 status.go:257] ha-788039-m04 status: &{Name:ha-788039-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
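
Note: the status stderr above shows how minikube decides "apiserver: Running": find the kube-apiserver pid, confirm its freezer cgroup is THAWED (i.e. the container is not paused), then probe /healthz through the HA virtual IP. A hand-run sketch on a control-plane node; the cgroup path is per-container (placeholder below), and curl -k merely stands in for minikube's own TLS client:

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup
    # a paused container would report FROZEN instead
    sudo cat /sys/fs/cgroup/freezer/docker/<container-id>/.../freezer.state   # expect THAWED
    curl -ks https://192.168.49.254:8443/healthz                              # expect: ok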

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.41s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-788039 node start m02 -v=7 --alsologtostderr: (29.386750063s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0328 00:09:58.425767 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:09:58.430995 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:09:58.441216 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:09:58.461578 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:09:58.502348 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.73s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.65s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-788039 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-788039 -v=7 --alsologtostderr
E0328 00:09:58.583113 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:09:58.743674 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:09:59.064194 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:09:59.704474 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:10:00.985348 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:10:03.546451 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:10:08.667404 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:10:18.908591 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-788039 -v=7 --alsologtostderr: (37.057033789s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-788039 --wait=true -v=7 --alsologtostderr
E0328 00:10:39.389282 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:11:20.350252 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-788039 --wait=true -v=7 --alsologtostderr: (1m23.439748466s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-788039
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.65s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.22s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-788039 node delete m03 -v=7 --alsologtostderr: (9.329591023s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.22s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (35.96s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 stop -v=7 --alsologtostderr
E0328 00:12:42.270592 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-788039 stop -v=7 --alsologtostderr: (35.85210235s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr: exit status 7 (104.918632ms)

-- stdout --
	ha-788039
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-788039-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-788039-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0328 00:12:45.843488 2020136 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:12:45.843667 2020136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:12:45.843681 2020136 out.go:304] Setting ErrFile to fd 2...
	I0328 00:12:45.843687 2020136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:12:45.843966 2020136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:12:45.844196 2020136 out.go:298] Setting JSON to false
	I0328 00:12:45.844244 2020136 mustload.go:65] Loading cluster: ha-788039
	I0328 00:12:45.844294 2020136 notify.go:220] Checking for updates...
	I0328 00:12:45.844682 2020136 config.go:182] Loaded profile config "ha-788039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:12:45.844695 2020136 status.go:255] checking status of ha-788039 ...
	I0328 00:12:45.845214 2020136 cli_runner.go:164] Run: docker container inspect ha-788039 --format={{.State.Status}}
	I0328 00:12:45.862262 2020136 status.go:330] ha-788039 host status = "Stopped" (err=<nil>)
	I0328 00:12:45.862288 2020136 status.go:343] host is not running, skipping remaining checks
	I0328 00:12:45.862296 2020136 status.go:257] ha-788039 status: &{Name:ha-788039 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:12:45.862343 2020136 status.go:255] checking status of ha-788039-m02 ...
	I0328 00:12:45.862684 2020136 cli_runner.go:164] Run: docker container inspect ha-788039-m02 --format={{.State.Status}}
	I0328 00:12:45.879004 2020136 status.go:330] ha-788039-m02 host status = "Stopped" (err=<nil>)
	I0328 00:12:45.879027 2020136 status.go:343] host is not running, skipping remaining checks
	I0328 00:12:45.879035 2020136 status.go:257] ha-788039-m02 status: &{Name:ha-788039-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:12:45.879069 2020136 status.go:255] checking status of ha-788039-m04 ...
	I0328 00:12:45.879357 2020136 cli_runner.go:164] Run: docker container inspect ha-788039-m04 --format={{.State.Status}}
	I0328 00:12:45.893243 2020136 status.go:330] ha-788039-m04 host status = "Stopped" (err=<nil>)
	I0328 00:12:45.893268 2020136 status.go:343] host is not running, skipping remaining checks
	I0328 00:12:45.893276 2020136 status.go:257] ha-788039-m04 status: &{Name:ha-788039-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.96s)
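
Note: with every node stopped, minikube status does not error out; it prints the per-node breakdown and exits with status 7 (the same code seen after StopSecondaryNode above). That makes stopped state easy to gate on in scripts; a minimal sketch, with the exit-code meaning taken only from the behavior observed in this log:

    out/minikube-linux-arm64 -p ha-788039 status >/dev/null 2>&1
    rc=$?
    # rc 0: everything running; rc 7 here: at least one node stopped
    [ "$rc" -eq 0 ] && echo "cluster up" || echo "cluster not fully running (exit $rc)"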

TestMultiControlPlane/serial/RestartCluster (82.14s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-788039 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0328 00:13:23.479078 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-788039 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m21.185575667s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.14s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMultiControlPlane/serial/AddSecondaryNode (44.43s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-788039 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-788039 --control-plane -v=7 --alsologtostderr: (43.405039934s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-788039 status -v=7 --alsologtostderr: (1.026509623s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestJSONOutput/start/Command (88.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-744723 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0328 00:15:26.110960 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-744723 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m28.631201904s)
--- PASS: TestJSONOutput/start/Command (88.64s)
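
Note: with --output=json, start emits one JSON event per line instead of human-readable progress, which is what the Audit and *CurrentSteps subtests below assert over. A sketch of consuming the stream, assuming jq is available (it is not part of this suite) and hedging on the event schema: the type string and data field names below are illustrative, not verified against this release:

    out/minikube-linux-arm64 start -p json-output-744723 --output=json --user=testUser \
        --memory=2200 --wait=true --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | (.data.currentstep|tostring) + ": " + .data.name'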

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.79s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-744723 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-744723 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-744723 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-744723 --output=json --user=testUser: (5.744646687s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-147927 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-147927 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.111081ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7105f2f3-3a6b-4a5d-9703-3472b1ba53bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-147927] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"24d3d489-343b-49e0-882c-12591beab460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18158"}}
	{"specversion":"1.0","id":"043e183a-4a34-4be7-bf88-83bdfa8806e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"731cb9d9-8790-4e3c-9407-4204c99f44d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig"}}
	{"specversion":"1.0","id":"74cac234-d325-4c6e-870c-5cfa3f875d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube"}}
	{"specversion":"1.0","id":"35ea81be-c44d-421c-8c7b-4c13b91634f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ac5acd44-d4cd-4a4f-b988-a35f786ef0de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"51650578-84d9-49f8-b87c-b24a218a436f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-147927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-147927
--- PASS: TestErrorJSONOutput (0.22s)
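
Each line in the stdout block above is a CloudEvents envelope; the run ends with an io.k8s.sigs.minikube.error event whose data payload carries the exit code (56) and reason (DRV_UNSUPPORTED_OS) that the process then returns. A small sketch of decoding those fields, with the struct shape inferred from the output above and the sample trimmed to the relevant fields:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// A trimmed copy of the final event above; id/source/datacontenttype dropped.
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev struct {
			Type string `json:"type"`
			Data struct {
				ExitCode string `json:"exitcode"`
				Message  string `json:"message"`
				Name     string `json:"name"`
			} `json:"data"`
		}
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s (exit %s)\n", ev.Data.Name, ev.Data.Message, ev.Data.ExitCode)
	}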

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.36s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-589436 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-589436 --network=: (40.321460138s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-589436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-589436
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-589436: (2.012407113s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.36s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.85s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-957202 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-957202 --network=bridge: (33.624754805s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-957202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-957202
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-957202: (2.206805337s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.85s)

                                                
                                    
TestKicExistingNetwork (34.68s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-061559 --network=existing-network
E0328 00:18:23.478688 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-061559 --network=existing-network: (32.552008444s)
helpers_test.go:175: Cleaning up "existing-network-061559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-061559
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-061559: (1.991647637s)
--- PASS: TestKicExistingNetwork (34.68s)

                                                
                                    
TestKicCustomSubnet (35.98s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-070940 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-070940 --subnet=192.168.60.0/24: (33.927408201s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-070940 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-070940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-070940
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-070940: (2.037415897s)
--- PASS: TestKicCustomSubnet (35.98s)
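
The verification step reads the network's IPAM configuration back through the Go template shown above and compares it with the requested --subnet. A sketch of the same round-trip via os/exec, assuming the docker CLI is on PATH and the network from this run still exists:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		want := "192.168.60.0/24" // the --subnet requested above
		// Same template the test uses: the first IPAM config entry's subnet.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-070940",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		if got := strings.TrimSpace(string(out)); got != want {
			fmt.Printf("subnet mismatch: got %q, want %q\n", got, want)
		}
	}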

                                                
                                    
TestKicStaticIP (34.92s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-015126 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-015126 --static-ip=192.168.200.200: (33.07614228s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-015126 ip
helpers_test.go:175: Cleaning up "static-ip-015126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-015126
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-015126: (1.709110276s)
--- PASS: TestKicStaticIP (34.92s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (68.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-119201 --driver=docker  --container-runtime=containerd
E0328 00:19:46.528244 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:19:58.426084 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-119201 --driver=docker  --container-runtime=containerd: (28.679498106s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-121991 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-121991 --driver=docker  --container-runtime=containerd: (34.257707065s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-119201
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-121991
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-121991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-121991
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-121991: (1.918297017s)
helpers_test.go:175: Cleaning up "first-119201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-119201
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-119201: (2.1888026s)
--- PASS: TestMinikubeProfile (68.25s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.34s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-156592 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-156592 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.337189034s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.34s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-156592 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.8s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-169720 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-169720 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.798330473s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.80s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-169720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-156592 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-156592 --alsologtostderr -v=5: (1.576721995s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-169720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-169720
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-169720: (1.202181929s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.34s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-169720
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-169720: (6.33910467s)
--- PASS: TestMountStart/serial/RestartStopped (7.34s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-169720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)
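
Every VerifyMount* step above is the same probe: list /minikube-host over minikube ssh and treat a non-zero exit as a missing mount. A minimal sketch of that probe, assuming the minikube binary path and profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// checkMount reports whether /minikube-host is listable inside the
	// profile's node, mirroring the "ssh -- ls /minikube-host" probe above.
	func checkMount(profile string) error {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", "--", "ls", "/minikube-host")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("mount not visible: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := checkMount("mount-start-2-169720"); err != nil {
			fmt.Println(err)
		}
	}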

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (75.7s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-893306 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-893306 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.186963s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.70s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (32.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-893306 -- rollout status deployment/busybox: (30.258544909s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-qs85h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-vh9p6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-qs85h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-vh9p6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-qs85h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-vh9p6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (32.21s)
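
The deployment check above collects every busybox pod IP with a JSONPath query before exercising DNS from each pod. A sketch of one plausible validation on that output, that no two pods share an IP, using kubectl --context directly rather than the minikube kubectl wrapper used above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same JSONPath the test uses to collect the pod IPs.
		out, err := exec.Command("kubectl", "--context", "multinode-893306",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			panic(err)
		}
		seen := map[string]bool{}
		for _, ip := range strings.Fields(string(out)) {
			if seen[ip] {
				fmt.Printf("duplicate pod IP %s\n", ip)
			}
			seen[ip] = true
		}
	}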

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.11s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-qs85h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-qs85h -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-vh9p6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-893306 -- exec busybox-7fdf7869d9-vh9p6 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.11s)

                                                
                                    
TestMultiNode/serial/AddNode (16.49s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-893306 -v 3 --alsologtostderr
E0328 00:23:23.479299 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-893306 -v 3 --alsologtostderr: (15.848377225s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.49s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-893306 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.89s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp testdata/cp-test.txt multinode-893306:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3283686465/001/cp-test_multinode-893306.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306:/home/docker/cp-test.txt multinode-893306-m02:/home/docker/cp-test_multinode-893306_multinode-893306-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m02 "sudo cat /home/docker/cp-test_multinode-893306_multinode-893306-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306:/home/docker/cp-test.txt multinode-893306-m03:/home/docker/cp-test_multinode-893306_multinode-893306-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m03 "sudo cat /home/docker/cp-test_multinode-893306_multinode-893306-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp testdata/cp-test.txt multinode-893306-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3283686465/001/cp-test_multinode-893306-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306-m02:/home/docker/cp-test.txt multinode-893306:/home/docker/cp-test_multinode-893306-m02_multinode-893306.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306 "sudo cat /home/docker/cp-test_multinode-893306-m02_multinode-893306.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306-m02:/home/docker/cp-test.txt multinode-893306-m03:/home/docker/cp-test_multinode-893306-m02_multinode-893306-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m03 "sudo cat /home/docker/cp-test_multinode-893306-m02_multinode-893306-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp testdata/cp-test.txt multinode-893306-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3283686465/001/cp-test_multinode-893306-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306-m03:/home/docker/cp-test.txt multinode-893306:/home/docker/cp-test_multinode-893306-m03_multinode-893306.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306 "sudo cat /home/docker/cp-test_multinode-893306-m03_multinode-893306.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 cp multinode-893306-m03:/home/docker/cp-test.txt multinode-893306-m02:/home/docker/cp-test_multinode-893306-m03_multinode-893306-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 ssh -n multinode-893306-m02 "sudo cat /home/docker/cp-test_multinode-893306-m03_multinode-893306-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.89s)

                                                
                                    
TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-893306 node stop m03: (1.21841479s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-893306 status: exit status 7 (493.42927ms)

                                                
                                                
-- stdout --
	multinode-893306
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-893306-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-893306-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-893306 status --alsologtostderr: exit status 7 (506.110533ms)

                                                
                                                
-- stdout --
	multinode-893306
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-893306-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-893306-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:23:37.288877 2072140 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:23:37.289031 2072140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:23:37.289039 2072140 out.go:304] Setting ErrFile to fd 2...
	I0328 00:23:37.289045 2072140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:23:37.289309 2072140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:23:37.289495 2072140 out.go:298] Setting JSON to false
	I0328 00:23:37.289530 2072140 mustload.go:65] Loading cluster: multinode-893306
	I0328 00:23:37.289574 2072140 notify.go:220] Checking for updates...
	I0328 00:23:37.289971 2072140 config.go:182] Loaded profile config "multinode-893306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:23:37.289989 2072140 status.go:255] checking status of multinode-893306 ...
	I0328 00:23:37.290754 2072140 cli_runner.go:164] Run: docker container inspect multinode-893306 --format={{.State.Status}}
	I0328 00:23:37.308100 2072140 status.go:330] multinode-893306 host status = "Running" (err=<nil>)
	I0328 00:23:37.308128 2072140 host.go:66] Checking if "multinode-893306" exists ...
	I0328 00:23:37.308433 2072140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-893306
	I0328 00:23:37.324583 2072140 host.go:66] Checking if "multinode-893306" exists ...
	I0328 00:23:37.324907 2072140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:23:37.324960 2072140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-893306
	I0328 00:23:37.350159 2072140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35179 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/multinode-893306/id_rsa Username:docker}
	I0328 00:23:37.439091 2072140 ssh_runner.go:195] Run: systemctl --version
	I0328 00:23:37.443144 2072140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:23:37.457276 2072140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:23:37.518159 2072140 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-03-28 00:23:37.507996605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:23:37.518774 2072140 kubeconfig.go:125] found "multinode-893306" server: "https://192.168.58.2:8443"
	I0328 00:23:37.518802 2072140 api_server.go:166] Checking apiserver status ...
	I0328 00:23:37.518846 2072140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:23:37.530274 2072140 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	I0328 00:23:37.540329 2072140 api_server.go:182] apiserver freezer: "12:freezer:/docker/f27f1beae58d2727ff1308d927f42b474a5e0f01c2874409292e4f398cad940b/kubepods/burstable/pod42d225f665e9d8f1aa589886527d3487/374d35cef714a7f26af301ee7876f8a4367f8c175d386ab73941d45f92c67660"
	I0328 00:23:37.540404 2072140 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f27f1beae58d2727ff1308d927f42b474a5e0f01c2874409292e4f398cad940b/kubepods/burstable/pod42d225f665e9d8f1aa589886527d3487/374d35cef714a7f26af301ee7876f8a4367f8c175d386ab73941d45f92c67660/freezer.state
	I0328 00:23:37.548813 2072140 api_server.go:204] freezer state: "THAWED"
	I0328 00:23:37.548855 2072140 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0328 00:23:37.557022 2072140 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0328 00:23:37.557053 2072140 status.go:422] multinode-893306 apiserver status = Running (err=<nil>)
	I0328 00:23:37.557065 2072140 status.go:257] multinode-893306 status: &{Name:multinode-893306 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:23:37.557081 2072140 status.go:255] checking status of multinode-893306-m02 ...
	I0328 00:23:37.557403 2072140 cli_runner.go:164] Run: docker container inspect multinode-893306-m02 --format={{.State.Status}}
	I0328 00:23:37.576590 2072140 status.go:330] multinode-893306-m02 host status = "Running" (err=<nil>)
	I0328 00:23:37.576619 2072140 host.go:66] Checking if "multinode-893306-m02" exists ...
	I0328 00:23:37.576922 2072140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-893306-m02
	I0328 00:23:37.592361 2072140 host.go:66] Checking if "multinode-893306-m02" exists ...
	I0328 00:23:37.592652 2072140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:23:37.592705 2072140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-893306-m02
	I0328 00:23:37.608494 2072140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35184 SSHKeyPath:/home/jenkins/minikube-integration/18158-1951721/.minikube/machines/multinode-893306-m02/id_rsa Username:docker}
	I0328 00:23:37.695000 2072140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:23:37.706882 2072140 status.go:257] multinode-893306-m02 status: &{Name:multinode-893306-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:23:37.706917 2072140 status.go:255] checking status of multinode-893306-m03 ...
	I0328 00:23:37.707237 2072140 cli_runner.go:164] Run: docker container inspect multinode-893306-m03 --format={{.State.Status}}
	I0328 00:23:37.722882 2072140 status.go:330] multinode-893306-m03 host status = "Stopped" (err=<nil>)
	I0328 00:23:37.722907 2072140 status.go:343] host is not running, skipping remaining checks
	I0328 00:23:37.722915 2072140 status.go:257] multinode-893306-m03 status: &{Name:multinode-893306-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
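
The verbose log above shows how status decides the apiserver is healthy: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz over HTTPS and expect a 200. A minimal sketch of that final probe against the endpoint from this run; InsecureSkipVerify keeps the sketch self-contained, whereas the real check trusts the cluster's CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode) // 200 means healthy
	}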

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.05s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-893306 node start m03 -v=7 --alsologtostderr: (8.34623049s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.05s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.55s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-893306
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-893306
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-893306: (25.30377968s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-893306 --wait=true -v=8 --alsologtostderr
E0328 00:24:58.424875 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-893306 --wait=true -v=8 --alsologtostderr: (55.104610937s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-893306
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.55s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-893306 node delete m03: (4.649710874s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.31s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-893306 stop: (23.791328973s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-893306 status: exit status 7 (93.503281ms)

                                                
                                                
-- stdout --
	multinode-893306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-893306-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-893306 status --alsologtostderr: exit status 7 (90.157686ms)

                                                
                                                
-- stdout --
	multinode-893306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-893306-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:25:36.587619 2079767 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:25:36.587780 2079767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:25:36.587790 2079767 out.go:304] Setting ErrFile to fd 2...
	I0328 00:25:36.587796 2079767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:25:36.588043 2079767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:25:36.588221 2079767 out.go:298] Setting JSON to false
	I0328 00:25:36.588257 2079767 mustload.go:65] Loading cluster: multinode-893306
	I0328 00:25:36.588358 2079767 notify.go:220] Checking for updates...
	I0328 00:25:36.588650 2079767 config.go:182] Loaded profile config "multinode-893306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:25:36.588669 2079767 status.go:255] checking status of multinode-893306 ...
	I0328 00:25:36.589157 2079767 cli_runner.go:164] Run: docker container inspect multinode-893306 --format={{.State.Status}}
	I0328 00:25:36.604648 2079767 status.go:330] multinode-893306 host status = "Stopped" (err=<nil>)
	I0328 00:25:36.604673 2079767 status.go:343] host is not running, skipping remaining checks
	I0328 00:25:36.604680 2079767 status.go:257] multinode-893306 status: &{Name:multinode-893306 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:25:36.604714 2079767 status.go:255] checking status of multinode-893306-m02 ...
	I0328 00:25:36.605003 2079767 cli_runner.go:164] Run: docker container inspect multinode-893306-m02 --format={{.State.Status}}
	I0328 00:25:36.620136 2079767 status.go:330] multinode-893306-m02 host status = "Stopped" (err=<nil>)
	I0328 00:25:36.620156 2079767 status.go:343] host is not running, skipping remaining checks
	I0328 00:25:36.620163 2079767 status.go:257] multinode-893306-m02 status: &{Name:multinode-893306-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)
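
Note that minikube status encodes cluster state in its exit code: exit status 7 here reflects stopped hosts rather than a failed command (the TestScheduledStopUnix run below likewise records "exit status 7 (may be ok)"). A script consuming status should therefore inspect the exit code instead of aborting on any non-zero result; a sketch, assuming the binary path and profile from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-893306", "status")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A non-zero exit still produced a status report on stdout.
			fmt.Printf("status exited %d:\n%s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}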

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-893306 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0328 00:26:21.471646 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-893306 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.054886487s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-893306 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.85s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.88s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-893306
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-893306-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-893306-m02 --driver=docker  --container-runtime=containerd: exit status 14 (89.282369ms)

                                                
                                                
-- stdout --
	* [multinode-893306-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-893306-m02' is duplicated with machine name 'multinode-893306-m02' in profile 'multinode-893306'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-893306-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-893306-m03 --driver=docker  --container-runtime=containerd: (31.26932314s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-893306
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-893306: exit status 80 (323.726022ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-893306 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-893306-m03 already exists in multinode-893306-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-893306-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-893306-m03: (2.142113371s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.88s)
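
Both rejections above enforce the same invariant: a name must be unique across existing profiles and across each profile's per-node machine names (profile, profile-m02, profile-m03, ...). A hypothetical guard sketching that check; the helper and data shape are illustrative, not minikube's actual validation code:

	package main

	import "fmt"

	// conflicts reports whether a requested profile name collides with an
	// existing profile or with one of its node machine names.
	func conflicts(requested string, profiles map[string][]string) bool {
		for profile, machines := range profiles {
			if requested == profile {
				return true
			}
			for _, m := range machines {
				if requested == m {
					return true
				}
			}
		}
		return false
	}

	func main() {
		existing := map[string][]string{
			"multinode-893306": {"multinode-893306", "multinode-893306-m02"},
		}
		fmt.Println(conflicts("multinode-893306-m02", existing)) // true: machine name already taken
		fmt.Println(conflicts("brand-new-profile", existing))    // false
	}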

                                                
                                    
TestPreload (108.35s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-585574 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-585574 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m10.293970392s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-585574 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-585574 image pull gcr.io/k8s-minikube/busybox: (1.338267764s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-585574
E0328 00:28:23.478769 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-585574: (12.066430257s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-585574 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-585574 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.000789966s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-585574 image list
helpers_test.go:175: Cleaning up "test-preload-585574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-585574
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-585574: (2.353925065s)
--- PASS: TestPreload (108.35s)

                                                
                                    
TestScheduledStopUnix (107.18s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-192059 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-192059 --memory=2048 --driver=docker  --container-runtime=containerd: (30.431864991s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-192059 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-192059 -n scheduled-stop-192059
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-192059 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-192059 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-192059 -n scheduled-stop-192059
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-192059
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-192059 --schedule 15s
E0328 00:29:58.427113 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-192059
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-192059: exit status 7 (74.739815ms)

                                                
                                                
-- stdout --
	scheduled-stop-192059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-192059 -n scheduled-stop-192059
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-192059 -n scheduled-stop-192059: exit status 7 (72.110958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-192059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-192059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-192059: (5.114990287s)
--- PASS: TestScheduledStopUnix (107.18s)
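Editor's note: twice above, "minikube status" exits with status 7 once the scheduled stop has fired, and the test logs it as "may be ok". A small sketch of reading that exit code, under the assumption that a non-zero status here signals a stopped host rather than a command failure; binary path and profile name are illustrative.

// status_exitcode.go: interpret the exit code of "minikube status".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", // assumed path
		"status", "-p", "scheduled-stop-demo", // hypothetical profile
		"--format", "{{.Host}}")
	out, err := cmd.Output() // stdout is still captured on non-zero exit
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 7 is how minikube status reports a stopped
		// cluster; the test above treats it as expected.
		fmt.Printf("status exited %d: %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("host state: %s", out)
}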

                                                
                                    
TestInsufficientStorage (10.1s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-477364 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-477364 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.669698849s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5767d036-271b-434f-b482-a8202a161bd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-477364] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c445170-f548-4c28-8d51-08a1d98b73d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18158"}}
	{"specversion":"1.0","id":"bfd2f3d1-3986-4954-b196-8e0af9a46790","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb82e193-f9aa-425f-8968-87f1502a2275","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig"}}
	{"specversion":"1.0","id":"f453b120-6cc2-429b-bf89-28ca815150dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube"}}
	{"specversion":"1.0","id":"c92dfeec-4eb6-4505-aae0-f1f4de713ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5eb06fb4-bbc1-4e52-8028-e6e64dd0f77b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4468fb2c-adad-41b6-a8ca-a9bd1b13934e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bb8f8ea6-6dc7-4756-a775-fa58d3e6a387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c2df8630-936e-48e9-996b-24a9e8582ada","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1756fd42-0c64-4a0d-b02c-f18b21a7acd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"09375284-6e56-4f40-9bae-0d4cd2267322","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-477364\" primary control-plane node in \"insufficient-storage-477364\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"37ea060f-bacd-4934-8f4b-2863f4a1d563","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-beta.0 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"581523c4-fb1b-48d2-b42e-bb67b4fa3fab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2386190a-9a9c-44eb-aef4-ac8da60b0827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-477364 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-477364 --output=json --layout=cluster: exit status 7 (281.579427ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-477364","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-477364","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:30:47.729842 2097344 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-477364" does not appear in /home/jenkins/minikube-integration/18158-1951721/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-477364 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-477364 --output=json --layout=cluster: exit status 7 (264.155072ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-477364","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-477364","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:30:47.990668 2097398 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-477364" does not appear in /home/jenkins/minikube-integration/18158-1951721/kubeconfig
	E0328 00:30:48.001179 2097398 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/insufficient-storage-477364/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-477364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-477364
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-477364: (1.882675673s)
--- PASS: TestInsufficientStorage (10.10s)
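Editor's note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as seen in the stdout above; the final event carries name RSRC_DOCKER_STORAGE and exitcode 26. A sketch of decoding those lines, with field names taken directly from that output and a shortened sample event:

// events_decode.go: decode minikube's --output=json event lines.
package main

import (
	"encoding/json"
	"fmt"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"x","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error %s (exit %s): %s\n",
			ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}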

                                                
                                    
TestRunningBinaryUpgrade (84.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1155891011 start -p running-upgrade-507887 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1155891011 start -p running-upgrade-507887 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.837533469s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-507887 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0328 00:36:26.529404 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-507887 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.751650627s)
helpers_test.go:175: Cleaning up "running-upgrade-507887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-507887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-507887: (2.199157053s)
--- PASS: TestRunningBinaryUpgrade (84.04s)

                                                
                                    
TestKubernetesUpgrade (373.23s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-490800 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-490800 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.419899769s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-490800
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-490800: (1.926863972s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-490800 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-490800 status --format={{.Host}}: exit status 7 (111.628642ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-490800 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0328 00:33:23.489795 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-490800 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m55.287933909s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-490800 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-490800 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-490800 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (91.638822ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-490800] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-490800
	    minikube start -p kubernetes-upgrade-490800 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4908002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-490800 --kubernetes-version=v1.30.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-490800 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-490800 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (14.076938142s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-490800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-490800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-490800: (2.175759015s)
--- PASS: TestKubernetesUpgrade (373.23s)
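Editor's note: the transcript above upgrades v1.20.0 to v1.30.0-beta.0 in place, then shows the reverse direction refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) and a delete-and-recreate suggestion. Below is a stdlib-only sketch of such a guard; it is an illustration of the version comparison, not minikube's implementation, and it deliberately ignores pre-release suffixes like "-beta.0" for brevity.

// downgrade_guard.go: refuse version downgrades with exit code 106.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func core(v string) [3]int {
	v = strings.TrimPrefix(v, "v")
	if i := strings.IndexByte(v, '-'); i >= 0 {
		v = v[:i] // drop "-beta.0" style suffixes (simplifying assumption)
	}
	var n [3]int
	for i, p := range strings.SplitN(v, ".", 3) {
		n[i], _ = strconv.Atoi(p)
	}
	return n
}

func main() {
	existing, requested := "v1.30.0-beta.0", "v1.20.0"
	a, b := core(existing), core(requested)
	for i := 0; i < 3; i++ {
		if b[i] < a[i] {
			fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
				existing, requested)
			os.Exit(106) // K8S_DOWNGRADE_UNSUPPORTED
		}
		if b[i] > a[i] {
			break
		}
	}
	fmt.Println("upgrade or same version: proceeding")
}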

                                                
                                    
TestMissingContainerUpgrade (159.57s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.973517802 start -p missing-upgrade-502162 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.973517802 start -p missing-upgrade-502162 --memory=2200 --driver=docker  --container-runtime=containerd: (1m27.598361289s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-502162
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-502162
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-502162 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-502162 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.806375804s)
helpers_test.go:175: Cleaning up "missing-upgrade-502162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-502162
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-502162: (2.279950273s)
--- PASS: TestMissingContainerUpgrade (159.57s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-600722 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-600722 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (126.683019ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-600722] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
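Editor's note: the test above confirms that combining --no-kubernetes with --kubernetes-version fails fast with exit code 14 (MK_USAGE) before any cluster work begins. A minimal sketch of that mutually-exclusive-flag check, using the standard flag package; flag names mirror the log, but the implementation is illustrative, not minikube's.

// flag_conflict.go: reject mutually exclusive flags with MK_USAGE (14).
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVer := flag.String("kubernetes-version", "", "Kubernetes version")
	flag.Parse()

	if *noK8s && *k8sVer != "" {
		fmt.Fprintln(os.Stderr,
			"X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE
	}
	fmt.Println("flags ok")
}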

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-600722 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-600722 --driver=docker  --container-runtime=containerd: (38.929779841s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-600722 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.38s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-600722 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-600722 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.362924714s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-600722 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-600722 status -o json: exit status 2 (301.936308ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-600722","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-600722
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-600722: (1.827846215s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.49s)

                                                
                                    
TestNoKubernetes/serial/Start (5.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-600722 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-600722 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.963251466s)
--- PASS: TestNoKubernetes/serial/Start (5.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-600722 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-600722 "sudo systemctl is-active --quiet service kubelet": exit status 1 (328.93802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
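Editor's note: the "ssh: Process exited with status 3" above is systemd's convention at work: "systemctl is-active" returns 3 for an inactive unit, and minikube ssh propagates that exit code, so the test asserts a non-zero exit to prove kubelet is not running. A sketch of the same check; binary path and profile name are assumptions.

// kubelet_inactive.go: assert kubelet is inactive inside the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "NoKubernetes-demo", // hypothetical profile
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			// systemd exits 3 for inactive; ssh forwards it.
			fmt.Printf("kubelet not running (exit %d), as expected\n", ee.ExitCode())
			return
		}
		panic(err)
	}
	fmt.Println("unexpected: kubelet is active")
}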

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-600722
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-600722: (1.270815593s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-600722 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-600722 --driver=docker  --container-runtime=containerd: (7.859693861s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-600722 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-600722 "sudo systemctl is-active --quiet service kubelet": exit status 1 (351.849404ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (112.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3697836237 start -p stopped-upgrade-804881 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3697836237 start -p stopped-upgrade-804881 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.027237084s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3697836237 -p stopped-upgrade-804881 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3697836237 -p stopped-upgrade-804881 stop: (20.033196167s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-804881 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0328 00:34:58.425247 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-804881 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.999697043s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (112.06s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-804881
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-804881: (1.445156131s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

                                                
                                    
TestPause/serial/Start (92.13s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-982807 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-982807 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.129970088s)
--- PASS: TestPause/serial/Start (92.13s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-982807 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0328 00:38:23.478510 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-982807 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.341400046s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.36s)

                                                
                                    
TestPause/serial/Pause (0.92s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-982807 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.92s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-982807 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-982807 --output=json --layout=cluster: exit status 2 (390.285377ms)

                                                
                                                
-- stdout --
	{"Name":"pause-982807","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-982807","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
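Editor's note: "minikube status --output=json --layout=cluster" returns a nested document, visible verbatim in the stdout above, where a paused cluster reports StatusCode 418 ("Paused"), a stopped component 405, and a healthy one 200. A sketch of decoding it, with field names and the (shortened) sample taken from that output:

// cluster_status.go: decode the --layout=cluster status JSON.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-982807","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-982807","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s: %s\n", c.Name, c.StatusName)
		}
	}
}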

                                                
                                    
TestPause/serial/Unpause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-982807 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.85s)

                                                
                                    
TestPause/serial/PauseAgain (1.04s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-982807 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-982807 --alsologtostderr -v=5: (1.044575448s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

                                                
                                    
TestPause/serial/DeletePaused (2.81s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-982807 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-982807 --alsologtostderr -v=5: (2.806473895s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-982807
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-982807: exit status 1 (15.154297ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-982807: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)
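Editor's note: the test above proves cleanup by the failure mode of "docker volume inspect": after "minikube delete", the command exits 1 with "no such volume". A sketch of that assertion; the profile/volume name is hypothetical.

// volume_gone.go: assert a profile's docker volume was removed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "volume", "inspect", "pause-demo").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("volume removed as expected")
		return
	}
	fmt.Printf("volume still present or unexpected error: %s", out)
}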

                                                
                                    
TestNetworkPlugins/group/false (6.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-588617 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-588617 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (314.692833ms)

                                                
                                                
-- stdout --
	* [false-588617] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:38:41.735659 2137802 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:38:41.735787 2137802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:38:41.735848 2137802 out.go:304] Setting ErrFile to fd 2...
	I0328 00:38:41.735854 2137802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:38:41.736097 2137802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18158-1951721/.minikube/bin
	I0328 00:38:41.736512 2137802 out.go:298] Setting JSON to false
	I0328 00:38:41.737505 2137802 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30060,"bootTime":1711556262,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 00:38:41.737581 2137802 start.go:139] virtualization:  
	I0328 00:38:41.741441 2137802 out.go:177] * [false-588617] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 00:38:41.744217 2137802 out.go:177]   - MINIKUBE_LOCATION=18158
	I0328 00:38:41.750473 2137802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:38:41.752791 2137802 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18158-1951721/kubeconfig
	I0328 00:38:41.750107 2137802 notify.go:220] Checking for updates...
	I0328 00:38:41.758816 2137802 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18158-1951721/.minikube
	I0328 00:38:41.761495 2137802 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 00:38:41.764073 2137802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:38:41.767127 2137802 config.go:182] Loaded profile config "force-systemd-flag-608052": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 00:38:41.767233 2137802 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:38:41.806039 2137802 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 00:38:41.806155 2137802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 00:38:41.943975 2137802 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 00:38:41.917694741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 00:38:41.944090 2137802 docker.go:295] overlay module found
	I0328 00:38:41.946957 2137802 out.go:177] * Using the docker driver based on user configuration
	I0328 00:38:41.949672 2137802 start.go:297] selected driver: docker
	I0328 00:38:41.949692 2137802 start.go:901] validating driver "docker" against <nil>
	I0328 00:38:41.949707 2137802 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:38:41.952976 2137802 out.go:177] 
	W0328 00:38:41.955339 2137802 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0328 00:38:41.957800 2137802 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-588617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-588617

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-588617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-588617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-588617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-588617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-588617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-588617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-588617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-588617" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> k8s: describe kube-proxy daemon set:
error: context "false-588617" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-588617" does not exist

>>> k8s: kube-proxy logs:
error: context "false-588617" does not exist

>>> host: kubelet daemon status:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: kubelet daemon config:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> k8s: kubelet logs:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-588617

>>> host: docker daemon status:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: docker daemon config:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: /etc/docker/daemon.json:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: docker system info:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: cri-docker daemon status:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: cri-docker daemon config:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: cri-dockerd version:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: containerd daemon status:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: containerd daemon config:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: /etc/containerd/config.toml:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: containerd config dump:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: crio daemon status:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: crio daemon config:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: /etc/crio:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

>>> host: crio config:
* Profile "false-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588617"

----------------------- debugLogs end: false-588617 [took: 5.631136577s] --------------------------------
helpers_test.go:175: Cleaning up "false-588617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-588617
--- PASS: TestNetworkPlugins/group/false (6.25s)
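Every probe in the debugLogs dump above failed with "Profile not found" or "context does not exist", which is consistent with the false plugin case never bringing a cluster up at all (the whole group passed in 6.25s). A hedged repro of the same failure mode by hand, with the profile/context name taken from this log:

out/minikube-linux-arm64 profile list          # false-588617 does not appear in the list
kubectl --context false-588617 get pods        # error: context "false-588617" does not exist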

TestStartStop/group/old-k8s-version/serial/FirstStart (146.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-847679 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-847679 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m26.264816091s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.26s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-847679 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1862520e-6ef9-4d12-ba1e-c4e9508a28d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1862520e-6ef9-4d12-ba1e-c4e9508a28d1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004571529s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-847679 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.08s)
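DeployApp's contract is visible in the lines above: create testdata/busybox.yaml, wait up to 8m0s for pods matching integration-test=busybox, then exec `ulimit -n` in the pod. The manifest itself is not reproduced in this report; the sketch below is a hypothetical equivalent, with the label taken from the selector above and the image taken from the image lists later in this report (the pod spec and `sleep` command are assumptions):

# hypothetical stand-in for testdata/busybox.yaml
cat <<'EOF' | kubectl --context old-k8s-version-847679 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
kubectl --context old-k8s-version-847679 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
kubectl --context old-k8s-version-847679 exec busybox -- /bin/sh -c "ulimit -n"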

TestStartStop/group/no-preload/serial/FirstStart (70.45s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-137753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-137753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (1m10.447770691s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.78s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-847679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-847679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.525675007s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-847679 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.78s)

TestStartStop/group/old-k8s-version/serial/Stop (14.9s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-847679 --alsologtostderr -v=3
E0328 00:43:01.471883 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-847679 --alsologtostderr -v=3: (14.900738955s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.90s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-847679 -n old-k8s-version-847679
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-847679 -n old-k8s-version-847679: exit status 7 (106.929504ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-847679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
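The pattern here: in this run `minikube status` exited 7 with the Host reported as Stopped, and the test explicitly treats that as "may be ok" before enabling the dashboard addon against the stopped profile. A rough shell rendering of the same tolerance, using the exact flags from this log (the exit-code handling is an assumption about how one would script the equivalent check):

out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-847679 -n old-k8s-version-847679; code=$?
if [ "$code" -eq 7 ]; then
  # exit 7 == stopped host: acceptable, addon config can still be changed
  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-847679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
fi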

TestStartStop/group/no-preload/serial/DeployApp (9.42s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-137753 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c58dc6d1-0a0f-4526-9dde-f11409ba35d0] Pending
helpers_test.go:344: "busybox" [c58dc6d1-0a0f-4526-9dde-f11409ba35d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c58dc6d1-0a0f-4526-9dde-f11409ba35d0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003258333s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-137753 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-137753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-137753 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/no-preload/serial/Stop (12.07s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-137753 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-137753 --alsologtostderr -v=3: (12.06604387s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-137753 -n no-preload-137753
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-137753 -n no-preload-137753: exit status 7 (85.949201ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-137753 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (266.65s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-137753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0328 00:44:58.425425 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:48:23.478530 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-137753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (4m26.265680816s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-137753 -n no-preload-137753
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.65s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8wlqr" [ee48c286-41b4-4258-b6a8-d19bc659ac20] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003446641s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8wlqr" [ee48c286-41b4-4258-b6a8-d19bc659ac20] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003872565s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-137753 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-137753 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
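VerifyKubernetesImages lists images as JSON and reports anything outside minikube's expected Kubernetes set (here kindnetd and the busybox test image). To eyeball the same data by hand, something like the line below should work; the `repoTags` field name is an assumption about the JSON shape, not confirmed by this report:

out/minikube-linux-arm64 -p no-preload-137753 image list --format=json | jq -r '.[].repoTags[]'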

TestStartStop/group/no-preload/serial/Pause (3.4s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-137753 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-137753 -n no-preload-137753
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-137753 -n no-preload-137753: exit status 2 (322.616948ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-137753 -n no-preload-137753
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-137753 -n no-preload-137753: exit status 2 (339.790096ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-137753 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-137753 -n no-preload-137753
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-137753 -n no-preload-137753
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.40s)
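The Pause sequence encodes a small expected-state matrix: after `pause`, the APIServer field reads Paused and the Kubelet field reads Stopped, each status probe exits 2 (tolerated as "may be ok"), and both probes are repeated after `unpause`. A condensed replay, with every flag copied from this log:

out/minikube-linux-arm64 pause -p no-preload-137753 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-137753 -n no-preload-137753 || true  # "Paused", exit 2
out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-137753 -n no-preload-137753 || true    # "Stopped", exit 2
out/minikube-linux-arm64 unpause -p no-preload-137753 --alsologtostderr -v=1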

TestStartStop/group/embed-certs/serial/FirstStart (81.61s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-705455 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-705455 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m21.610219224s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.61s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jqzw6" [8b976334-d096-4733-b356-fd699493f106] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003394817s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jqzw6" [8b976334-d096-4733-b356-fd699493f106] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004768574s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-847679 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-847679 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-847679 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-847679 -n old-k8s-version-847679
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-847679 -n old-k8s-version-847679: exit status 2 (338.765684ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-847679 -n old-k8s-version-847679
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-847679 -n old-k8s-version-847679: exit status 2 (370.838608ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-847679 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-847679 --alsologtostderr -v=1: (1.072400614s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-847679 -n old-k8s-version-847679
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-847679 -n old-k8s-version-847679
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.55s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.89s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-164287 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0328 00:49:58.425484 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-164287 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m30.891126895s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.89s)

TestStartStop/group/embed-certs/serial/DeployApp (7.42s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-705455 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d58059e1-84d0-4fb5-866c-223c76f62d22] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d58059e1-84d0-4fb5-866c-223c76f62d22] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.005114202s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-705455 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.42s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-705455 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-705455 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (12.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-705455 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-705455 --alsologtostderr -v=3: (12.064733778s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-705455 -n embed-certs-705455
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-705455 -n embed-certs-705455: exit status 7 (85.349108ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-705455 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (281.27s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-705455 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-705455 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m40.924281021s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-705455 -n embed-certs-705455
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (281.27s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-164287 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [873e3d7e-6bbb-40c7-9c69-df75eb9dcd0b] Pending
helpers_test.go:344: "busybox" [873e3d7e-6bbb-40c7-9c69-df75eb9dcd0b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [873e3d7e-6bbb-40c7-9c69-df75eb9dcd0b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004415782s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-164287 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.47s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-164287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-164287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.31420104s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-164287 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.47s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-164287 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-164287 --alsologtostderr -v=3: (12.157566914s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287: exit status 7 (78.554381ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-164287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-164287 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0328 00:52:39.575345 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:39.580552 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:39.590786 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:39.611101 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:39.651295 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:39.731764 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:39.892203 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:40.212776 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:40.853583 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:42.134110 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:44.695819 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:52:49.816836 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:53:00.064571 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:53:06.530139 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:53:20.544995 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:53:23.478835 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
E0328 00:53:56.953012 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:56.958265 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:56.968853 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:56.989131 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:57.029472 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:57.109792 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:57.270019 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:57.590547 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:58.230891 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:53:59.511661 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:54:01.505716 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
E0328 00:54:02.071956 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:54:07.192548 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:54:17.432970 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:54:37.913484 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:54:58.424990 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
E0328 00:55:18.874291 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:55:23.426056 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-164287 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m38.878695714s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.38s)
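The E0328 cert_rotation lines interleaved above appear to come from client-go's certificate reload watcher still tracking client.crt paths that did not exist at the time (profiles mid-restart or already cleaned up earlier in the run); the start completed and the test passed, so they are noise with respect to this result.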

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wzjvs" [0ae52e28-0da7-48df-ac69-52d5af9c44d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004156894s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wzjvs" [0ae52e28-0da7-48df-ac69-52d5af9c44d7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004779908s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-705455 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-705455 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-705455 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-705455 -n embed-certs-705455
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-705455 -n embed-certs-705455: exit status 2 (332.489914ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-705455 -n embed-certs-705455
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-705455 -n embed-certs-705455: exit status 2 (331.407361ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-705455 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-705455 -n embed-certs-705455
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-705455 -n embed-certs-705455
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

TestStartStop/group/newest-cni/serial/FirstStart (47.35s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-399317 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-399317 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (47.34895668s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.35s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2pck5" [0eb982ad-3bb7-4124-a749-7c5b95a74dc4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004112891s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2pck5" [0eb982ad-3bb7-4124-a749-7c5b95a74dc4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004581256s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-164287 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-164287 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-164287 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287: exit status 2 (344.17311ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287: exit status 2 (363.524996ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-164287 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-164287 -n default-k8s-diff-port-164287
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)
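
Note: the Pause check above cycles the profile through pause/unpause and reads per-component state with Go-template status queries; a non-zero exit from minikube status is expected while components are intentionally down, which is why the harness logs "status error ... (may be ok)". A by-hand replay against the same profile:

    out/minikube-linux-arm64 pause -p default-k8s-diff-port-164287
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-164287   # expect "Paused", non-zero exit
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-164287     # expect "Stopped", non-zero exit
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-164287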

TestNetworkPlugins/group/auto/Start (87.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m27.941723963s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.94s)
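
Note: the TestNetworkPlugins runs that follow differ almost only in the --cni flag, which takes either a built-in plugin name (kindnet, calico, flannel, bridge) or a path to a custom manifest (the custom-flannel run below passes testdata/kube-flannel.yaml); the auto run above omits the flag and lets minikube choose. The shared invocation, sketched with placeholders:

    out/minikube-linux-arm64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=<plugin-or-manifest> --driver=docker --container-runtime=containerd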

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-399317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-399317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.402981984s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/newest-cni/serial/Stop (3.16s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-399317 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-399317 --alsologtostderr -v=3: (3.16263206s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.16s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.35s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-399317 -n newest-cni-399317
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-399317 -n newest-cni-399317: exit status 7 (118.636834ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-399317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.35s)

TestStartStop/group/newest-cni/serial/SecondStart (24.7s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-399317 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0328 00:56:40.794672 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-399317 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (24.099610149s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-399317 -n newest-cni-399317
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.70s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-399317 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/Pause (3.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-399317 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-399317 --alsologtostderr -v=1: (1.011374898s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-399317 -n newest-cni-399317
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-399317 -n newest-cni-399317: exit status 2 (408.657853ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-399317 -n newest-cni-399317
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-399317 -n newest-cni-399317: exit status 2 (390.037448ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-399317 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-399317 -n newest-cni-399317
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-399317 -n newest-cni-399317
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.59s)
E0328 01:02:58.660642 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:02:58.665930 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:02:58.676111 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:02:58.696411 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:02:58.736863 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:02:58.817112 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:02:58.977443 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:02:59.297947 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:02:59.938779 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:03:01.219510 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:03:03.780046 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:03:08.900873 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory
E0328 01:03:19.142240 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/auto-588617/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (92.36s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0328 00:57:39.575449 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m32.358708639s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.36s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-588617 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-588617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-75r8x" [11e4b1d1-9d3d-408f-9249-20db5621bf35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-75r8x" [11e4b1d1-9d3d-408f-9249-20db5621bf35] Running
E0328 00:58:07.266408 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004445098s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)
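
Note: each NetCatPod step force-replaces testdata/netcat-deployment.yaml and polls until pods labeled app=netcat report Ready. The harness polls through helpers_test.go rather than kubectl, but a rough manual equivalent using kubectl's own readiness wait would be:

    kubectl --context auto-588617 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-588617 wait --for=condition=ready pod -l app=netcat --timeout=15m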

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-588617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
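
Note: Localhost and HairPin probe two different data paths from inside the netcat pod: nc against localhost:8080 only proves the container can reach its own listening socket, while nc against the service name "netcat" exercises hairpin traffic, i.e. a pod reaching itself back through its own Service. By hand, assuming the netcat Deployment and Service from the test data are still running:

    kubectl --context auto-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"

A failure here alongside a passing Localhost check usually points at hairpin NAT handling (kubelet hairpin mode or kube-proxy masquerade rules) rather than basic pod networking.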

TestNetworkPlugins/group/calico/Start (75.5s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m15.503930519s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.50s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dvwp2" [2f34e48d-7a26-42a2-9cee-39e605e99920] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004033457s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-588617 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.51s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-588617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fnmjb" [5508ba8d-88a4-4db7-b9cc-254186c06783] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fnmjb" [5508ba8d-88a4-4db7-b9cc-254186c06783] Running
E0328 00:58:56.952840 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005658923s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.51s)

TestNetworkPlugins/group/kindnet/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-588617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.34s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (67.72s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0328 00:59:24.634869 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/no-preload-137753/client.crt: no such file or directory
E0328 00:59:41.472842 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m7.720373774s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.72s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6mcjv" [6f3c11c4-10a5-47ba-9662-a496a9453c80] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006824812s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-588617 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

TestNetworkPlugins/group/calico/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-588617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8knnt" [0bd6aa61-413e-4817-b2cb-65983fa8ebfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8knnt" [0bd6aa61-413e-4817-b2cb-65983fa8ebfa] Running
E0328 00:59:58.425400 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/functional-197628/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004529123s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.38s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-588617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (92.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m32.459492867s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.46s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-588617 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-588617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5hmst" [55605b46-f61f-43f3-a1c8-e30be17fbc6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5hmst" [55605b46-f61f-43f3-a1c8-e30be17fbc6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004690402s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-588617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (64.23s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0328 01:01:09.727472 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:09.732742 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:09.743410 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:09.763670 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:09.803899 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:09.884172 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:10.044619 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:10.365582 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:11.005847 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:12.286155 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:14.846511 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:19.966905 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:30.207395 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
E0328 01:01:50.687695 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/default-k8s-diff-port-164287/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.232975568s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.23s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-588617 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-588617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9jvtx" [6be66071-8ad6-496f-9edf-55bb42916777] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9jvtx" [6be66071-8ad6-496f-9edf-55bb42916777] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005520366s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.42s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bjktz" [a56cee0e-a87b-41bb-beae-43a0e9281681] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00471044s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-588617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-588617 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-588617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n957s" [5633c861-5c49-487a-9e77-9d86ee46d92c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n957s" [5633c861-5c49-487a-9e77-9d86ee46d92c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004871408s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-588617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (49.34s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0328 01:02:39.575036 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/old-k8s-version-847679/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-588617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (49.341181664s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.34s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-588617 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-588617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zbfhc" [d041f9ab-4da4-4b57-8384-2271455724ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0328 01:03:23.479708 1957141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18158-1951721/.minikube/profiles/addons-482679/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zbfhc" [d041f9ab-4da4-4b57-8384-2271455724ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003979167s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-588617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-588617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-147301 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-147301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-147301
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
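
The three tunnel DNS skips above share one platform gate: tunnel DNS forwarding only works with the hyperkit driver on macOS, so every other OS/driver combination skips. A hypothetical sketch of that gate; driverName() is an assumed stand-in:

package sketch

import (
	"runtime"
	"testing"
)

func driverName() string { return "docker" } // driver under test

func TestDNSForwardingGuard(t *testing.T) {
	// Anything other than hyperkit-on-darwin cannot exercise the
	// forwarding path, so the test skips rather than fails.
	if runtime.GOOS != "darwin" || driverName() != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}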

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
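
This one is gated on a harness flag rather than the environment. A hypothetical sketch of the flag gate; the real harness registers its own --gvisor flag, this only shows the shape of the check:

package sketch

import (
	"flag"
	"testing"
)

var gvisor = flag.Bool("gvisor", false, "run tests that require the gvisor addon")

func TestGvisorGuard(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... enable the gvisor addon and schedule a gvisor-runtime pod ...
}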

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
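
This skip combines a driver check with an environment-variable check. A hypothetical sketch of that gate, grounded only in the skip message above; driverName() is an assumed stand-in:

package sketch

import (
	"os"
	"testing"
)

func driverName() string { return "docker" } // driver under test

func TestChangeNoneUserGuard(t *testing.T) {
	// Running the none driver under sudo is the scenario under test,
	// so both the driver and SUDO_USER must be present.
	if driverName() != "none" || os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
}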

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-156881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-156881
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
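
Note that this group still creates a profile and cleans it up before skipping, which is why it takes 0.16s rather than 0.00s. The gate itself is a driver check; a hypothetical sketch, with driverName() as an assumed stand-in:

package sketch

import "testing"

func driverName() string { return "docker" } // driver under test

func TestDisableDriverMountsGuard(t *testing.T) {
	// Driver mounts are a VirtualBox feature, so the group only runs
	// there; on other drivers the profile is deleted and the test skips.
	if driverName() != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
}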

TestNetworkPlugins/group/kubenet (5.16s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-588617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-588617

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-588617

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /etc/hosts:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /etc/resolv.conf:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-588617

>>> host: crictl pods:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: crictl containers:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> k8s: describe netcat deployment:
error: context "kubenet-588617" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-588617" does not exist

>>> k8s: netcat logs:
error: context "kubenet-588617" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-588617" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-588617" does not exist

>>> k8s: coredns logs:
error: context "kubenet-588617" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-588617" does not exist

>>> k8s: api server logs:
error: context "kubenet-588617" does not exist

>>> host: /etc/cni:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: ip a s:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: ip r s:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: iptables-save:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: iptables table nat:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-588617" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-588617" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-588617" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: kubelet daemon config:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> k8s: kubelet logs:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-588617

>>> host: docker daemon status:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: docker daemon config:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: docker system info:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: cri-docker daemon status:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: cri-docker daemon config:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: cri-dockerd version:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: containerd daemon status:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: containerd daemon config:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: containerd config dump:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: crio daemon status:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: crio daemon config:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: /etc/crio:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

>>> host: crio config:
* Profile "kubenet-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588617"

----------------------- debugLogs end: kubenet-588617 [took: 4.903359878s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-588617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-588617
--- SKIP: TestNetworkPlugins/group/kubenet (5.16s)
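
Every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the skip fires before any kubenet-588617 cluster is created; the harness still collects diagnostics on the way out. The gate itself is a runtime check: kubenet means running without a CNI plugin, which the containerd runtime cannot do. A minimal, hypothetical sketch, with containerRuntime() as an assumed stand-in:

package sketch

import "testing"

func containerRuntime() string { return "containerd" } // runtime under test

func TestKubenetGuard(t *testing.T) {
	// kubenet disables CNI, but non-docker runtimes need a CNI plugin
	// for pod networking, so the whole group skips on containerd.
	if rt := containerRuntime(); rt != "docker" {
		t.Skipf("Skipping the test as %s container runtimes requires CNI", rt)
	}
}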

TestNetworkPlugins/group/cilium (5.83s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-588617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-588617

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-588617

>>> host: /etc/nsswitch.conf:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /etc/hosts:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /etc/resolv.conf:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-588617

>>> host: crictl pods:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: crictl containers:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> k8s: describe netcat deployment:
error: context "cilium-588617" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-588617" does not exist

>>> k8s: netcat logs:
error: context "cilium-588617" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-588617" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-588617" does not exist

>>> k8s: coredns logs:
error: context "cilium-588617" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-588617" does not exist

>>> k8s: api server logs:
error: context "cilium-588617" does not exist

>>> host: /etc/cni:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: ip a s:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: ip r s:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: iptables-save:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: iptables table nat:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-588617

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-588617

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-588617" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-588617" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-588617

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-588617

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-588617" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-588617" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-588617" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-588617" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-588617" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: kubelet daemon config:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> k8s: kubelet logs:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-588617

>>> host: docker daemon status:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: docker daemon config:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: docker system info:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: cri-docker daemon status:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: cri-docker daemon config:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: cri-dockerd version:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: containerd daemon status:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: containerd daemon config:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: containerd config dump:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: crio daemon status:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: crio daemon config:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: /etc/crio:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

>>> host: crio config:
* Profile "cilium-588617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588617"

----------------------- debugLogs end: cilium-588617 [took: 5.614434113s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-588617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-588617
--- SKIP: TestNetworkPlugins/group/cilium (5.83s)