Test Report: Docker_Linux_containerd_arm64 17967

10ecd0aeb1ec35670d13066c60edb6e287060cba:2024-01-16:32725

Failed tests (8/320)

TestAddons/parallel/Ingress (36.55s)
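
To re-run only this subtest locally, a sketch (assumes a minikube source checkout with the binary under test already built; Go's -run flag selects subtests with slash-separated patterns, and the suite may need extra -args flags, such as the path to the minikube binary, that are not shown here):

	go test ./test/integration -run 'TestAddons/parallel/Ingress' -v -timeout 60m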

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-843965 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-843965 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-843965 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [95845dfb-c257-44a4-9e07-cc13da067b19] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [95845dfb-c257-44a4-9e07-cc13da067b19] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003790276s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-843965 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.069707805s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-843965 addons disable ingress-dns --alsologtostderr -v=1: (1.625647338s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-843965 addons disable ingress --alsologtostderr -v=1: (7.767283982s)
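
The root failure above is the nslookup timeout against the node IP. A minimal manual re-check, assuming dig is available on the host (192.168.49.2 is the node IP printed by the ip command earlier):

	# send one query with a short timeout straight to the ingress-dns server on the node
	dig +time=5 +tries=1 @192.168.49.2 hello-john.test
	# if this also times out, confirm the addon pod was running; the namespace and
	# label selector here are assumptions about where ingress-dns deploys its pod
	kubectl --context addons-843965 -n kube-system get pods -l app=minikube-ingress-dns
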
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-843965
helpers_test.go:235: (dbg) docker inspect addons-843965:

-- stdout --
	[
	    {
	        "Id": "957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0",
	        "Created": "2024-01-16T02:55:00.86248583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1892575,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T02:55:01.184798336Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0/hostname",
	        "HostsPath": "/var/lib/docker/containers/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0/hosts",
	        "LogPath": "/var/lib/docker/containers/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0-json.log",
	        "Name": "/addons-843965",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-843965:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-843965",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2157af1e163464c740e3a071e36883785e37de9b175ba968e06bb16d5c79b14e-init/diff:/var/lib/docker/overlay2/261e7c2ec33123e281bd6870ab3b0bda4a6870d39bd5f5e877084941df0b6b78/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2157af1e163464c740e3a071e36883785e37de9b175ba968e06bb16d5c79b14e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2157af1e163464c740e3a071e36883785e37de9b175ba968e06bb16d5c79b14e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2157af1e163464c740e3a071e36883785e37de9b175ba968e06bb16d5c79b14e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-843965",
	                "Source": "/var/lib/docker/volumes/addons-843965/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-843965",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-843965",
	                "name.minikube.sigs.k8s.io": "addons-843965",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "71519a76ae7b8526ca61cef33e7b5afbdbeb9f2ef2e9b81aad28660efab78e1c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35022"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35019"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35021"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35020"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/71519a76ae7b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-843965": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "957975e94f70",
	                        "addons-843965"
	                    ],
	                    "NetworkID": "c66612f51545ad0e83b9184eae5568eb04ff39420657456eeae92cdcba98b2d9",
	                    "EndpointID": "45a141facfe91b301d1f8b5c2b54dd492b90e66b665e215b23d0573fc06ab2f5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
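
For reference, the ephemeral host ports recorded above (e.g. 35023 for 22/tcp) can be read back with the same Go-template form of docker inspect that this log itself uses during provisioning:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-843965
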
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-843965 -n addons-843965
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-843965 logs -n 25: (1.554341248s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-807644                                                                     | download-only-807644   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| delete  | -p download-only-111300                                                                     | download-only-111300   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| delete  | -p download-only-795548                                                                     | download-only-795548   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| start   | --download-only -p                                                                          | download-docker-734822 | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | download-docker-734822                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-734822                                                                   | download-docker-734822 | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-337521   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | binary-mirror-337521                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34529                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-337521                                                                     | binary-mirror-337521   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | addons-843965                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | addons-843965                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-843965 --wait=true                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-843965 ip                                                                            | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:57 UTC |
	| addons  | addons-843965 addons disable                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:57 UTC |
	|         | -p addons-843965                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-843965 ssh cat                                                                       | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:57 UTC |
	|         | /opt/local-path-provisioner/pvc-7b134c94-38a8-4396-b5f8-502ac0f0b814_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-843965 addons disable                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-843965 addons                                                                        | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC |                     |
	|         | addons-843965                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | -p addons-843965                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-843965 addons                                                                        | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-843965 addons                                                                        | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | addons-843965                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-843965 ssh curl -s                                                                   | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-843965 ip                                                                            | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	| addons  | addons-843965 addons disable                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:59 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-843965 addons disable                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:54:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:54:53.982631 1892116 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:54:53.982843 1892116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:54:53.982873 1892116 out.go:309] Setting ErrFile to fd 2...
	I0116 02:54:53.982894 1892116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:54:53.983172 1892116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 02:54:53.983655 1892116 out.go:303] Setting JSON to false
	I0116 02:54:53.984570 1892116 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34630,"bootTime":1705339064,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 02:54:53.984678 1892116 start.go:138] virtualization:  
	I0116 02:54:53.987343 1892116 out.go:177] * [addons-843965] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 02:54:53.989970 1892116 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:54:53.990108 1892116 notify.go:220] Checking for updates...
	I0116 02:54:53.994514 1892116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:54:53.996764 1892116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 02:54:53.998946 1892116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 02:54:54.003937 1892116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 02:54:54.006071 1892116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:54:54.008130 1892116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:54:54.031881 1892116 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:54:54.032010 1892116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:54:54.114602 1892116 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-16 02:54:54.104802624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:54:54.114732 1892116 docker.go:295] overlay module found
	I0116 02:54:54.116872 1892116 out.go:177] * Using the docker driver based on user configuration
	I0116 02:54:54.118745 1892116 start.go:298] selected driver: docker
	I0116 02:54:54.118759 1892116 start.go:902] validating driver "docker" against <nil>
	I0116 02:54:54.118772 1892116 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:54:54.119461 1892116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:54:54.179982 1892116 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-16 02:54:54.170247676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:54:54.180142 1892116 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:54:54.180394 1892116 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:54:54.182151 1892116 out.go:177] * Using Docker driver with root privileges
	I0116 02:54:54.183728 1892116 cni.go:84] Creating CNI manager for ""
	I0116 02:54:54.183750 1892116 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:54:54.183762 1892116 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:54:54.183773 1892116 start_flags.go:321] config:
	{Name:addons-843965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-843965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:54:54.185748 1892116 out.go:177] * Starting control plane node addons-843965 in cluster addons-843965
	I0116 02:54:54.187393 1892116 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0116 02:54:54.189042 1892116 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:54:54.190727 1892116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:54:54.190781 1892116 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0116 02:54:54.190805 1892116 cache.go:56] Caching tarball of preloaded images
	I0116 02:54:54.190817 1892116 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:54:54.190882 1892116 preload.go:174] Found /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0116 02:54:54.190892 1892116 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0116 02:54:54.191246 1892116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/config.json ...
	I0116 02:54:54.191279 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/config.json: {Name:mk31bcf33447fff82611ee0607a5f06e45495f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:54:54.208782 1892116 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 02:54:54.208808 1892116 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 02:54:54.208831 1892116 cache.go:194] Successfully downloaded all kic artifacts
	I0116 02:54:54.208890 1892116 start.go:365] acquiring machines lock for addons-843965: {Name:mkc6ac54037945c19e3ff2dd20ef63e1ab89dd31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:54:54.209021 1892116 start.go:369] acquired machines lock for "addons-843965" in 111.767µs
	I0116 02:54:54.209047 1892116 start.go:93] Provisioning new machine with config: &{Name:addons-843965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-843965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 02:54:54.209127 1892116 start.go:125] createHost starting for "" (driver="docker")
	I0116 02:54:54.211622 1892116 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0116 02:54:54.211882 1892116 start.go:159] libmachine.API.Create for "addons-843965" (driver="docker")
	I0116 02:54:54.211935 1892116 client.go:168] LocalClient.Create starting
	I0116 02:54:54.212053 1892116 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem
	I0116 02:54:54.964023 1892116 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem
	I0116 02:54:55.169919 1892116 cli_runner.go:164] Run: docker network inspect addons-843965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 02:54:55.190812 1892116 cli_runner.go:211] docker network inspect addons-843965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 02:54:55.190909 1892116 network_create.go:281] running [docker network inspect addons-843965] to gather additional debugging logs...
	I0116 02:54:55.190934 1892116 cli_runner.go:164] Run: docker network inspect addons-843965
	W0116 02:54:55.208024 1892116 cli_runner.go:211] docker network inspect addons-843965 returned with exit code 1
	I0116 02:54:55.208060 1892116 network_create.go:284] error running [docker network inspect addons-843965]: docker network inspect addons-843965: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-843965 not found
	I0116 02:54:55.208073 1892116 network_create.go:286] output of [docker network inspect addons-843965]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-843965 not found
	
	** /stderr **
	I0116 02:54:55.208167 1892116 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:54:55.228585 1892116 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024cb360}
	I0116 02:54:55.228628 1892116 network_create.go:124] attempt to create docker network addons-843965 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 02:54:55.228689 1892116 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-843965 addons-843965
	I0116 02:54:55.321169 1892116 network_create.go:108] docker network addons-843965 192.168.49.0/24 created
	I0116 02:54:55.321202 1892116 kic.go:121] calculated static IP "192.168.49.2" for the "addons-843965" container
	I0116 02:54:55.321278 1892116 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 02:54:55.344148 1892116 cli_runner.go:164] Run: docker volume create addons-843965 --label name.minikube.sigs.k8s.io=addons-843965 --label created_by.minikube.sigs.k8s.io=true
	I0116 02:54:55.368295 1892116 oci.go:103] Successfully created a docker volume addons-843965
	I0116 02:54:55.368380 1892116 cli_runner.go:164] Run: docker run --rm --name addons-843965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-843965 --entrypoint /usr/bin/test -v addons-843965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 02:54:56.564693 1892116 cli_runner.go:217] Completed: docker run --rm --name addons-843965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-843965 --entrypoint /usr/bin/test -v addons-843965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.196272123s)
	I0116 02:54:56.564725 1892116 oci.go:107] Successfully prepared a docker volume addons-843965
	I0116 02:54:56.564754 1892116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:54:56.564776 1892116 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 02:54:56.564864 1892116 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-843965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 02:55:00.776081 1892116 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-843965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.21116914s)
	I0116 02:55:00.776115 1892116 kic.go:203] duration metric: took 4.211336 seconds to extract preloaded images to volume
	W0116 02:55:00.776258 1892116 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 02:55:00.776402 1892116 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 02:55:00.844475 1892116 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-843965 --name addons-843965 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-843965 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-843965 --network addons-843965 --ip 192.168.49.2 --volume addons-843965:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 02:55:01.194469 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Running}}
	I0116 02:55:01.224105 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:01.259127 1892116 cli_runner.go:164] Run: docker exec addons-843965 stat /var/lib/dpkg/alternatives/iptables
	I0116 02:55:01.328915 1892116 oci.go:144] the created container "addons-843965" has a running status.
	I0116 02:55:01.328947 1892116 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa...
	I0116 02:55:01.834881 1892116 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 02:55:01.866408 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:01.900012 1892116 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 02:55:01.900041 1892116 kic_runner.go:114] Args: [docker exec --privileged addons-843965 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 02:55:01.983792 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:02.017410 1892116 machine.go:88] provisioning docker machine ...
	I0116 02:55:02.017478 1892116 ubuntu.go:169] provisioning hostname "addons-843965"
	I0116 02:55:02.017548 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:02.067745 1892116 main.go:141] libmachine: Using SSH client type: native
	I0116 02:55:02.068212 1892116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35023 <nil> <nil>}
	I0116 02:55:02.068232 1892116 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-843965 && echo "addons-843965" | sudo tee /etc/hostname
	I0116 02:55:02.247132 1892116 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-843965
	
	I0116 02:55:02.247224 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:02.267119 1892116 main.go:141] libmachine: Using SSH client type: native
	I0116 02:55:02.267536 1892116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35023 <nil> <nil>}
	I0116 02:55:02.267558 1892116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-843965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-843965/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-843965' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:55:02.411819 1892116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:55:02.411849 1892116 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17967-1885793/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-1885793/.minikube}
	I0116 02:55:02.411885 1892116 ubuntu.go:177] setting up certificates
	I0116 02:55:02.411896 1892116 provision.go:83] configureAuth start
	I0116 02:55:02.411977 1892116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-843965
	I0116 02:55:02.431073 1892116 provision.go:138] copyHostCerts
	I0116 02:55:02.431165 1892116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.pem (1078 bytes)
	I0116 02:55:02.431347 1892116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/cert.pem (1123 bytes)
	I0116 02:55:02.431453 1892116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/key.pem (1679 bytes)
	I0116 02:55:02.431531 1892116 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem org=jenkins.addons-843965 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-843965]
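(For context: the line above issues a server certificate signed by the minikube CA with the listed SANs. A minimal Go sketch of that step with crypto/x509 follows; the key size, serial number, and validity period are simplified assumptions, and caCert/caKey are assumed to be loaded elsewhere.)

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a fresh server key with the CA, embedding the SAN
// list from the provision.go line above. Serial handling and key size are
// illustrative simplifications.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-843965"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "addons-843965"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}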
	I0116 02:55:02.952608 1892116 provision.go:172] copyRemoteCerts
	I0116 02:55:02.952694 1892116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:55:02.952738 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:02.972559 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.071821 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:55:03.100569 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 02:55:03.129868 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:55:03.158693 1892116 provision.go:86] duration metric: configureAuth took 746.778527ms
	I0116 02:55:03.158735 1892116 ubuntu.go:193] setting minikube options for container-runtime
	I0116 02:55:03.158926 1892116 config.go:182] Loaded profile config "addons-843965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:55:03.158939 1892116 machine.go:91] provisioned docker machine in 1.14150702s
	I0116 02:55:03.158946 1892116 client.go:171] LocalClient.Create took 8.947003804s
	I0116 02:55:03.158968 1892116 start.go:167] duration metric: libmachine.API.Create for "addons-843965" took 8.947087412s
	I0116 02:55:03.158981 1892116 start.go:300] post-start starting for "addons-843965" (driver="docker")
	I0116 02:55:03.158990 1892116 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:55:03.159043 1892116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:55:03.159089 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:03.177044 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.276430 1892116 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:55:03.280762 1892116 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 02:55:03.280849 1892116 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 02:55:03.280869 1892116 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 02:55:03.280880 1892116 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 02:55:03.280891 1892116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-1885793/.minikube/addons for local assets ...
	I0116 02:55:03.280970 1892116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-1885793/.minikube/files for local assets ...
	I0116 02:55:03.281000 1892116 start.go:303] post-start completed in 122.014124ms
	I0116 02:55:03.281303 1892116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-843965
	I0116 02:55:03.299093 1892116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/config.json ...
	I0116 02:55:03.299376 1892116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:55:03.299435 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:03.318138 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.411511 1892116 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 02:55:03.417367 1892116 start.go:128] duration metric: createHost completed in 9.20822512s
	I0116 02:55:03.417390 1892116 start.go:83] releasing machines lock for "addons-843965", held for 9.208361099s
	I0116 02:55:03.417480 1892116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-843965
	I0116 02:55:03.435237 1892116 ssh_runner.go:195] Run: cat /version.json
	I0116 02:55:03.435300 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:03.435545 1892116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:55:03.435611 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:03.460290 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.460953 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.558805 1892116 ssh_runner.go:195] Run: systemctl --version
	I0116 02:55:03.695325 1892116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:55:03.701099 1892116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0116 02:55:03.730065 1892116 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0116 02:55:03.730158 1892116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:55:03.763496 1892116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0116 02:55:03.763515 1892116 start.go:475] detecting cgroup driver to use...
	I0116 02:55:03.763545 1892116 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 02:55:03.763593 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 02:55:03.777393 1892116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:55:03.790091 1892116 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:55:03.790199 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:55:03.806387 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:55:03.822379 1892116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:55:03.912719 1892116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:55:04.015594 1892116 docker.go:233] disabling docker service ...
	I0116 02:55:04.015688 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:55:04.038615 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:55:04.052792 1892116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:55:04.159590 1892116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:55:04.255107 1892116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:55:04.269231 1892116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:55:04.290247 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 02:55:04.304398 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 02:55:04.317293 1892116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 02:55:04.317379 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 02:55:04.329536 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:55:04.342311 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 02:55:04.354955 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:55:04.367264 1892116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:55:04.379493 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
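(For context: the sed invocations above rewrite /etc/containerd/config.toml in place to force the cgroupfs driver and the runc v2 runtime. Below is a minimal Go sketch of one of those substitutions, the SystemdCgroup toggle, using regexp; the path and the TOML layout are assumptions about a stock config, not a guaranteed format.)

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}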
	I0116 02:55:04.392026 1892116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:55:04.402990 1892116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:55:04.414692 1892116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:55:04.530217 1892116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 02:55:04.691114 1892116 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0116 02:55:04.691238 1892116 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0116 02:55:04.696064 1892116 start.go:543] Will wait 60s for crictl version
	I0116 02:55:04.696175 1892116 ssh_runner.go:195] Run: which crictl
	I0116 02:55:04.700578 1892116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:55:04.745171 1892116 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0116 02:55:04.745293 1892116 ssh_runner.go:195] Run: containerd --version
	I0116 02:55:04.781244 1892116 ssh_runner.go:195] Run: containerd --version
	I0116 02:55:04.817162 1892116 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0116 02:55:04.818790 1892116 cli_runner.go:164] Run: docker network inspect addons-843965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
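(For context: docker's --format flag, used throughout this log, evaluates a Go text/template against the inspected object. A tiny illustrative sketch follows; the struct here is a stand-in, not docker's real network type.)

package main

import (
	"os"
	"text/template"
)

type network struct {
	Name   string
	Driver string
}

func main() {
	// Same mechanism as: docker network inspect --format "{...{{.Name}}...}"
	tmpl := template.Must(template.New("f").Parse(`{"Name": "{{.Name}}","Driver": "{{.Driver}}"}` + "\n"))
	_ = tmpl.Execute(os.Stdout, network{Name: "addons-843965", Driver: "bridge"})
}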
	I0116 02:55:04.835837 1892116 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 02:55:04.840319 1892116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:55:04.853406 1892116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:55:04.853587 1892116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:55:04.896139 1892116 containerd.go:612] all images are preloaded for containerd runtime.
	I0116 02:55:04.896165 1892116 containerd.go:519] Images already preloaded, skipping extraction
	I0116 02:55:04.896234 1892116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:55:04.934124 1892116 containerd.go:612] all images are preloaded for containerd runtime.
	I0116 02:55:04.934148 1892116 cache_images.go:84] Images are preloaded, skipping loading
	I0116 02:55:04.934204 1892116 ssh_runner.go:195] Run: sudo crictl info
	I0116 02:55:04.974382 1892116 cni.go:84] Creating CNI manager for ""
	I0116 02:55:04.974408 1892116 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:55:04.974464 1892116 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:55:04.974497 1892116 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-843965 NodeName:addons-843965 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:55:04.974643 1892116 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-843965"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:55:04.974712 1892116 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-843965 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-843965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:55:04.974777 1892116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:55:04.985383 1892116 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:55:04.985471 1892116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:55:04.995828 1892116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0116 02:55:05.020096 1892116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:55:05.043537 1892116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0116 02:55:05.066048 1892116 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 02:55:05.070956 1892116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:55:05.085006 1892116 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965 for IP: 192.168.49.2
	I0116 02:55:05.085040 1892116 certs.go:190] acquiring lock for shared ca certs: {Name:mk53d39e364f11aa45d491413f4acdef0406f659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:05.085903 1892116 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key
	I0116 02:55:05.566600 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt ...
	I0116 02:55:05.566632 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt: {Name:mkdc5ed6571f50d2e0aab8c7fed4eb3fb81c1731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:05.566826 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key ...
	I0116 02:55:05.566841 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key: {Name:mkd5624a3d41975891289b1ea898068bb8950d9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:05.566927 1892116 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key
	I0116 02:55:06.098738 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.crt ...
	I0116 02:55:06.098768 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.crt: {Name:mk09ac82365c28dac5db824c5d79ac4ca94b7a85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.098954 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key ...
	I0116 02:55:06.098965 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key: {Name:mk274d84808802a3d8948cc4330d55c86d0481be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.099099 1892116 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.key
	I0116 02:55:06.099118 1892116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt with IP's: []
	I0116 02:55:06.684553 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt ...
	I0116 02:55:06.684586 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: {Name:mkc0857303b93f77dcd17b744c4a61aeb7ad070e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.685530 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.key ...
	I0116 02:55:06.685551 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.key: {Name:mk17c81f0815c44f212d704f18242ef523c5ddfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.686226 1892116 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key.dd3b5fb2
	I0116 02:55:06.686254 1892116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:55:06.823278 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt.dd3b5fb2 ...
	I0116 02:55:06.823308 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt.dd3b5fb2: {Name:mk41672ca4d63f94f04fd9d08f2d8af03af51a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.823520 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key.dd3b5fb2 ...
	I0116 02:55:06.823537 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key.dd3b5fb2: {Name:mk6f49ed1d7ec48ea445470d369ab62d1d740e43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.823633 1892116 certs.go:337] copying /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt
	I0116 02:55:06.823716 1892116 certs.go:341] copying /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key
	I0116 02:55:06.823772 1892116 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.key
	I0116 02:55:06.823792 1892116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.crt with IP's: []
	I0116 02:55:07.360436 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.crt ...
	I0116 02:55:07.360472 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.crt: {Name:mk190483ecdd5fa8b455db472a83c2adff797c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:07.361304 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.key ...
	I0116 02:55:07.361324 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.key: {Name:mk33a944a0e28c2cdb9a5b4915ed65a60ebf8883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:07.362071 1892116 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:55:07.362127 1892116 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:55:07.362181 1892116 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:55:07.362220 1892116 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem (1679 bytes)
	I0116 02:55:07.362861 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:55:07.393170 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 02:55:07.423315 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:55:07.454769 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 02:55:07.484226 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:55:07.513741 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 02:55:07.542937 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:55:07.571695 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 02:55:07.600808 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:55:07.629580 1892116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:55:07.650745 1892116 ssh_runner.go:195] Run: openssl version
	I0116 02:55:07.657899 1892116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:55:07.669177 1892116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:55:07.673807 1892116 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:55 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:55:07.673889 1892116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:55:07.682345 1892116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:55:07.693617 1892116 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:55:07.697866 1892116 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:55:07.697911 1892116 kubeadm.go:404] StartCluster: {Name:addons-843965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-843965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:55:07.698034 1892116 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0116 02:55:07.698095 1892116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:55:07.740144 1892116 cri.go:89] found id: ""
	I0116 02:55:07.740216 1892116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:55:07.750820 1892116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:55:07.761422 1892116 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 02:55:07.761520 1892116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:55:07.772351 1892116 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:55:07.772403 1892116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 02:55:07.872338 1892116 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 02:55:07.957511 1892116 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:55:26.102078 1892116 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:55:26.102133 1892116 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:55:26.102215 1892116 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 02:55:26.102267 1892116 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 02:55:26.102300 1892116 kubeadm.go:322] OS: Linux
	I0116 02:55:26.102342 1892116 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 02:55:26.102396 1892116 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 02:55:26.102442 1892116 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 02:55:26.102487 1892116 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 02:55:26.102532 1892116 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 02:55:26.102577 1892116 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 02:55:26.102619 1892116 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0116 02:55:26.102664 1892116 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0116 02:55:26.102707 1892116 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0116 02:55:26.102775 1892116 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:55:26.102865 1892116 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:55:26.102952 1892116 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:55:26.103010 1892116 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:55:26.105033 1892116 out.go:204]   - Generating certificates and keys ...
	I0116 02:55:26.105199 1892116 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:55:26.105285 1892116 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:55:26.105390 1892116 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:55:26.105545 1892116 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:55:26.105610 1892116 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:55:26.105661 1892116 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:55:26.105715 1892116 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:55:26.105842 1892116 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-843965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 02:55:26.105899 1892116 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:55:26.106014 1892116 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-843965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 02:55:26.106080 1892116 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:55:26.106144 1892116 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:55:26.106189 1892116 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:55:26.106244 1892116 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:55:26.106296 1892116 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:55:26.106349 1892116 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:55:26.106419 1892116 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:55:26.106474 1892116 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:55:26.106557 1892116 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:55:26.106623 1892116 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:55:26.108634 1892116 out.go:204]   - Booting up control plane ...
	I0116 02:55:26.108738 1892116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:55:26.108816 1892116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:55:26.108882 1892116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:55:26.108991 1892116 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:55:26.109075 1892116 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:55:26.109115 1892116 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:55:26.109269 1892116 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:55:26.109345 1892116 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.006829 seconds
	I0116 02:55:26.109469 1892116 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:55:26.109728 1892116 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:55:26.109799 1892116 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:55:26.109988 1892116 kubeadm.go:322] [mark-control-plane] Marking the node addons-843965 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:55:26.110045 1892116 kubeadm.go:322] [bootstrap-token] Using token: 5ccrjl.pay5uy3xwb94lc61
	I0116 02:55:26.111921 1892116 out.go:204]   - Configuring RBAC rules ...
	I0116 02:55:26.112030 1892116 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:55:26.112115 1892116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:55:26.112255 1892116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:55:26.112383 1892116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:55:26.112503 1892116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:55:26.112597 1892116 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:55:26.112712 1892116 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:55:26.112755 1892116 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:55:26.112800 1892116 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:55:26.112805 1892116 kubeadm.go:322] 
	I0116 02:55:26.112865 1892116 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:55:26.112870 1892116 kubeadm.go:322] 
	I0116 02:55:26.112947 1892116 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:55:26.112952 1892116 kubeadm.go:322] 
	I0116 02:55:26.112977 1892116 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:55:26.113036 1892116 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:55:26.113087 1892116 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:55:26.113091 1892116 kubeadm.go:322] 
	I0116 02:55:26.113145 1892116 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:55:26.113150 1892116 kubeadm.go:322] 
	I0116 02:55:26.113198 1892116 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:55:26.113202 1892116 kubeadm.go:322] 
	I0116 02:55:26.113255 1892116 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:55:26.113330 1892116 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:55:26.113409 1892116 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:55:26.113414 1892116 kubeadm.go:322] 
	I0116 02:55:26.113698 1892116 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:55:26.113807 1892116 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:55:26.113836 1892116 kubeadm.go:322] 
	I0116 02:55:26.113936 1892116 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5ccrjl.pay5uy3xwb94lc61 \
	I0116 02:55:26.114085 1892116 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6218d0988b2a7aa9cfeacd0df5d75f7b2af48c94d0234c3fb2bf032e099bbd3 \
	I0116 02:55:26.114111 1892116 kubeadm.go:322] 	--control-plane 
	I0116 02:55:26.114116 1892116 kubeadm.go:322] 
	I0116 02:55:26.114203 1892116 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:55:26.114210 1892116 kubeadm.go:322] 
	I0116 02:55:26.114336 1892116 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5ccrjl.pay5uy3xwb94lc61 \
	I0116 02:55:26.114500 1892116 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6218d0988b2a7aa9cfeacd0df5d75f7b2af48c94d0234c3fb2bf032e099bbd3 
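(For context: the --discovery-token-ca-cert-hash in the join commands above is, per kubeadm's documentation, "sha256:" followed by the hex-encoded SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch that recomputes it from ca.crt follows; the path is the one used in this run.)

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the Subject Public Key Info, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}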
	I0116 02:55:26.114536 1892116 cni.go:84] Creating CNI manager for ""
	I0116 02:55:26.114555 1892116 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:55:26.116572 1892116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:55:26.118472 1892116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:55:26.124102 1892116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:55:26.124164 1892116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:55:26.166905 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:55:27.049777 1892116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:55:27.049923 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:27.050005 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=addons-843965 minikube.k8s.io/updated_at=2024_01_16T02_55_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:27.060870 1892116 ops.go:34] apiserver oom_adj: -16
	I0116 02:55:27.230993 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:27.731158 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:28.231997 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:28.731667 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:29.231215 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:29.731159 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:30.231716 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:30.731731 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:31.231768 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:31.731178 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:32.231969 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:32.731699 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:33.231749 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:33.731177 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:34.231507 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:34.731974 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:35.231353 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:35.731232 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:36.231303 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:36.731694 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:37.231742 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:37.731778 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:38.231140 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:38.369976 1892116 kubeadm.go:1088] duration metric: took 11.320105765s to wait for elevateKubeSystemPrivileges.
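(For context: the burst of identical `kubectl get sa default` lines above is a 500ms poll waiting for the default service account to exist before RBAC privileges are elevated. A minimal Go sketch of the same retry loop follows; the 60s deadline is an illustrative choice, not minikube's exact timeout.)

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for {
		// Same probe as the log lines above; kubeconfig path from this run.
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			log.Println("default service account is ready")
			return
		}
		if time.Now().After(deadline) {
			log.Fatal("timed out waiting for default service account")
		}
		time.Sleep(500 * time.Millisecond)
	}
}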
	I0116 02:55:38.370017 1892116 kubeadm.go:406] StartCluster complete in 30.67210981s
	I0116 02:55:38.370036 1892116 settings.go:142] acquiring lock: {Name:mk5ef3d7839aa1301dd151a46eaf62e1b5658d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:38.370159 1892116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 02:55:38.370567 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/kubeconfig: {Name:mk03027f3f7cf4dc9d608a622efae9ada84d58d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:38.372979 1892116 config.go:182] Loaded profile config "addons-843965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:55:38.373043 1892116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:55:38.373162 1892116 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0116 02:55:38.373272 1892116 addons.go:69] Setting yakd=true in profile "addons-843965"
	I0116 02:55:38.373295 1892116 addons.go:234] Setting addon yakd=true in "addons-843965"
	I0116 02:55:38.373336 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.373900 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.374355 1892116 addons.go:69] Setting cloud-spanner=true in profile "addons-843965"
	I0116 02:55:38.374376 1892116 addons.go:234] Setting addon cloud-spanner=true in "addons-843965"
	I0116 02:55:38.374408 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.374818 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.375154 1892116 addons.go:69] Setting metrics-server=true in profile "addons-843965"
	I0116 02:55:38.375184 1892116 addons.go:234] Setting addon metrics-server=true in "addons-843965"
	I0116 02:55:38.375224 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.375676 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.376042 1892116 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-843965"
	I0116 02:55:38.376063 1892116 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-843965"
	I0116 02:55:38.376098 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.376485 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.381781 1892116 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-843965"
	I0116 02:55:38.381846 1892116 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-843965"
	I0116 02:55:38.381883 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.382293 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.384255 1892116 addons.go:69] Setting registry=true in profile "addons-843965"
	I0116 02:55:38.384276 1892116 addons.go:234] Setting addon registry=true in "addons-843965"
	I0116 02:55:38.384312 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.384757 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.418378 1892116 addons.go:69] Setting default-storageclass=true in profile "addons-843965"
	I0116 02:55:38.418460 1892116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-843965"
	I0116 02:55:38.418854 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.419475 1892116 addons.go:69] Setting storage-provisioner=true in profile "addons-843965"
	I0116 02:55:38.419535 1892116 addons.go:234] Setting addon storage-provisioner=true in "addons-843965"
	I0116 02:55:38.419627 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.420205 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.434754 1892116 addons.go:69] Setting gcp-auth=true in profile "addons-843965"
	I0116 02:55:38.434801 1892116 mustload.go:65] Loading cluster: addons-843965
	I0116 02:55:38.435012 1892116 config.go:182] Loaded profile config "addons-843965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:55:38.435293 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.440762 1892116 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-843965"
	I0116 02:55:38.440843 1892116 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-843965"
	I0116 02:55:38.443274 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.448645 1892116 addons.go:69] Setting ingress=true in profile "addons-843965"
	I0116 02:55:38.448720 1892116 addons.go:234] Setting addon ingress=true in "addons-843965"
	I0116 02:55:38.448809 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.449343 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.461706 1892116 addons.go:69] Setting volumesnapshots=true in profile "addons-843965"
	I0116 02:55:38.461774 1892116 addons.go:234] Setting addon volumesnapshots=true in "addons-843965"
	I0116 02:55:38.461852 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.462362 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.462555 1892116 addons.go:69] Setting ingress-dns=true in profile "addons-843965"
	I0116 02:55:38.462589 1892116 addons.go:234] Setting addon ingress-dns=true in "addons-843965"
	I0116 02:55:38.462638 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.463030 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.488672 1892116 addons.go:69] Setting inspektor-gadget=true in profile "addons-843965"
	I0116 02:55:38.488749 1892116 addons.go:234] Setting addon inspektor-gadget=true in "addons-843965"
	I0116 02:55:38.488824 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.489463 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.601547 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 02:55:38.603485 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 02:55:38.608544 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 02:55:38.612970 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 02:55:38.614998 1892116 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 02:55:38.626292 1892116 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 02:55:38.628201 1892116 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 02:55:38.628256 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 02:55:38.628346 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
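	The "installing … / scp memory -->" pairs stream each manifest into the node over SSH rather than copying a file from disk, and the Ports-format inspect resolves which localhost port fronts the container's sshd (port 35023 in the sshutil lines below). A rough manual equivalent, assuming a local copy of the manifest and a key path under $MINIKUBE_HOME (both assumptions, not taken from this run):
	
	# Resolve the host port mapped onto the node container's 22/tcp.
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-843965)
	# Stream a manifest into the node, approximating "scp memory -->".
	ssh -i "$MINIKUBE_HOME/machines/addons-843965/id_rsa" -p "$PORT" docker@127.0.0.1 \
	  'sudo tee /etc/kubernetes/addons/registry-rc.yaml >/dev/null' < registry-rc.yaml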
	I0116 02:55:38.635730 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 02:55:38.647090 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 02:55:38.650246 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 02:55:38.653494 1892116 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 02:55:38.653501 1892116 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 02:55:38.653507 1892116 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 02:55:38.662841 1892116 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 02:55:38.671723 1892116 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 02:55:38.671744 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 02:55:38.671811 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.669785 1892116 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:55:38.676956 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 02:55:38.677077 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.699049 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 02:55:38.699075 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 02:55:38.699139 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.701165 1892116 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 02:55:38.701193 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 02:55:38.701260 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.738490 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 02:55:38.740681 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.740855 1892116 addons.go:234] Setting addon default-storageclass=true in "addons-843965"
	I0116 02:55:38.741889 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.742387 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.742602 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 02:55:38.754044 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 02:55:38.754068 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 02:55:38.754137 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.765377 1892116 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-843965"
	I0116 02:55:38.765415 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.765897 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.743341 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 02:55:38.769676 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 02:55:38.769739 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.743374 1892116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:55:38.791722 1892116 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:55:38.791742 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:55:38.791803 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.795745 1892116 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 02:55:38.807058 1892116 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 02:55:38.812981 1892116 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:55:38.813085 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 02:55:38.815447 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 02:55:38.815529 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.815698 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 02:55:38.815784 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.867847 1892116 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 02:55:38.869966 1892116 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:55:38.872767 1892116 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:55:38.875027 1892116 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:55:38.875049 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 02:55:38.875115 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.888906 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:38.912763 1892116 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-843965" context rescaled to 1 replicas
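	The rescale above pins CoreDNS to one replica for this single-node profile; the CLI counterpart would be:
	
	kubectl --context addons-843965 -n kube-system scale deployment coredns --replicas=1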
	I0116 02:55:38.912801 1892116 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 02:55:38.914784 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:38.915232 1892116 out.go:177] * Verifying Kubernetes components...
	I0116 02:55:38.917154 1892116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:55:38.956651 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:38.990025 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:38.991064 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.017681 1892116 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:55:39.017701 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:55:39.017761 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:39.018041 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.025673 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.072329 1892116 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 02:55:39.074815 1892116 out.go:177]   - Using image docker.io/busybox:stable
	I0116 02:55:39.078835 1892116 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:55:39.078859 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 02:55:39.078923 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:39.087904 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.092052 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.107831 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.125796 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.141416 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.158438 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	W0116 02:55:39.162318 1892116 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0116 02:55:39.162350 1892116 retry.go:31] will retry after 280.069557ms: ssh: handshake failed: EOF
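	With this many SSH sessions dialed back to back, one handshake hitting sshd mid-start and dying with EOF is routine, and the retry above simply re-dials after a short delay. A hedged sketch of the pattern (KEY and PORT stand in for the key path and port shown in the sshutil lines):
	
	# Re-dial on transient handshake failure, backing off between tries.
	for delay in 0.3 0.6 1.2; do
	  ssh -i "$KEY" -p "$PORT" docker@127.0.0.1 true && break
	  sleep "$delay"
	done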
	I0116 02:55:39.238287 1892116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:55:39.241332 1892116 node_ready.go:35] waiting up to 6m0s for node "addons-843965" to be "Ready" ...
	I0116 02:55:39.244930 1892116 node_ready.go:49] node "addons-843965" has status "Ready":"True"
	I0116 02:55:39.244958 1892116 node_ready.go:38] duration metric: took 3.59417ms waiting for node "addons-843965" to be "Ready" ...
	I0116 02:55:39.244968 1892116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:55:39.258479 1892116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace to be "Ready" ...
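	node_ready and pod_ready poll the API server directly; expressed with kubectl, the same two gates would look roughly like this (equivalent commands, not what the test runs):
	
	# CLI counterparts of the node and pod readiness gates above.
	kubectl --context addons-843965 wait --for=condition=Ready node/addons-843965 --timeout=6m
	kubectl --context addons-843965 -n kube-system wait --for=condition=Ready pod/coredns-5dd5756b68-drb7k --timeout=6m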
	I0116 02:55:39.578183 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:55:39.723357 1892116 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 02:55:39.723427 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 02:55:39.738087 1892116 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 02:55:39.738150 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 02:55:39.748043 1892116 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 02:55:39.748114 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 02:55:39.788848 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:55:39.794914 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 02:55:39.795009 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 02:55:39.826650 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 02:55:39.840067 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 02:55:39.840130 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 02:55:39.881986 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 02:55:39.882048 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 02:55:39.906942 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:55:39.927536 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:55:39.970589 1892116 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:55:39.970658 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 02:55:40.007730 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:55:40.017853 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:55:40.019786 1892116 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 02:55:40.019876 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 02:55:40.029999 1892116 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 02:55:40.030111 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 02:55:40.122819 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 02:55:40.122894 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 02:55:40.196576 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 02:55:40.196646 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 02:55:40.223707 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 02:55:40.223784 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 02:55:40.298825 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:55:40.303329 1892116 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 02:55:40.303356 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 02:55:40.373241 1892116 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:55:40.373310 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 02:55:40.388298 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 02:55:40.388368 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 02:55:40.457471 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 02:55:40.457543 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 02:55:40.495696 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 02:55:40.495756 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 02:55:40.586208 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 02:55:40.586343 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 02:55:40.682276 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:55:40.682338 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 02:55:40.703452 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:55:40.740678 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 02:55:40.740747 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 02:55:40.851631 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 02:55:40.851690 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 02:55:40.870471 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:55:40.964407 1892116 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:55:40.964472 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 02:55:41.060993 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 02:55:41.061066 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 02:55:41.125124 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 02:55:41.125202 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 02:55:41.225263 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:55:41.264831 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:41.282777 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 02:55:41.282848 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 02:55:41.383341 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 02:55:41.383412 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 02:55:41.523664 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 02:55:41.523736 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 02:55:41.552610 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 02:55:41.552688 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 02:55:41.578519 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 02:55:41.578585 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 02:55:41.625357 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:55:41.625431 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 02:55:41.662015 1892116 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.423687364s)
	I0116 02:55:41.662105 1892116 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
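	The sed pipeline that just completed edits the live Corefile before kubectl replace pushes it back: it inserts a log directive ahead of errors and, ahead of the forward line, exactly this hosts block (quoted from the sed expression above):
	
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	
	After the replace, in-cluster lookups of host.minikube.internal resolve to the Docker network gateway, 192.168.49.1.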
	I0116 02:55:41.668724 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:55:41.668792 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 02:55:41.695785 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:55:41.752697 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:55:41.819117 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.240854583s)
	I0116 02:55:43.290953 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:43.557401 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.768479601s)
	I0116 02:55:43.557536 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.730819609s)
	I0116 02:55:43.557594 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.650581142s)
	I0116 02:55:45.302807 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:45.554324 1892116 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 02:55:45.554676 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:45.603355 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:45.855239 1892116 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 02:55:45.892859 1892116 addons.go:234] Setting addon gcp-auth=true in "addons-843965"
	I0116 02:55:45.892912 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:45.893364 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:45.918428 1892116 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 02:55:45.918484 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:45.950905 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:46.520754 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.593133861s)
	I0116 02:55:46.520788 1892116 addons.go:470] Verifying addon ingress=true in "addons-843965"
	I0116 02:55:46.520982 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.513183313s)
	I0116 02:55:46.521098 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.503174507s)
	I0116 02:55:46.521134 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.222278022s)
	I0116 02:55:46.521212 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.817689553s)
	I0116 02:55:46.521332 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.295972177s)
	I0116 02:55:46.521344 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.650715796s)
	I0116 02:55:46.523950 1892116 out.go:177] * Verifying ingress addon...
	I0116 02:55:46.526743 1892116 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 02:55:46.526967 1892116 addons.go:470] Verifying addon metrics-server=true in "addons-843965"
	I0116 02:55:46.526993 1892116 addons.go:470] Verifying addon registry=true in "addons-843965"
	W0116 02:55:46.527127 1892116 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 02:55:46.528648 1892116 out.go:177] * Verifying registry addon...
	I0116 02:55:46.530591 1892116 retry.go:31] will retry after 276.639829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
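	The failure is an ordering race, not a bad manifest: the VolumeSnapshotClass is submitted in the same apply as the CRDs that define its kind, and the server has not established snapshot.storage.k8s.io/v1 by the time the CR is mapped, hence "ensure CRDs are installed first". The retried apply below (02:55:46.807414) adds --force; done by hand, the usual fix is to establish the CRDs before applying the custom resources, e.g.:
	
	# Apply CRDs, wait until established, then apply the custom resources.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml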
	I0116 02:55:46.531398 1892116 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 02:55:46.531575 1892116 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-843965 service yakd-dashboard -n yakd-dashboard
	
	I0116 02:55:46.542871 1892116 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 02:55:46.542898 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0116 02:55:46.545552 1892116 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
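	The default-storageclass warning is an optimistic-concurrency conflict: storageclass local-path changed between minikube's read and its write, so the write carried a stale resourceVersion and was rejected; rerunning the update against the latest object succeeds. A manual equivalent of what the callback was attempting:
	
	# Unmark local-path so the "standard" class can become the default.
	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'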
	I0116 02:55:46.548622 1892116 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 02:55:46.548643 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:46.807414 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:55:47.031541 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:47.042018 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:47.305980 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:47.545955 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:47.547119 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:47.988542 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.292664578s)
	I0116 02:55:47.988623 1892116 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-843965"
	I0116 02:55:47.988870 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.236092511s)
	I0116 02:55:47.988908 1892116 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.070460748s)
	I0116 02:55:47.991118 1892116 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 02:55:47.994280 1892116 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 02:55:47.995495 1892116 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:55:47.999947 1892116 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 02:55:48.003669 1892116 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 02:55:48.003753 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 02:55:48.014111 1892116 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 02:55:48.014140 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:48.031677 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:48.039612 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:48.071040 1892116 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 02:55:48.071104 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 02:55:48.148108 1892116 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:55:48.148180 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 02:55:48.193731 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:55:48.503743 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:48.531211 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:48.536914 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:48.838194 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.030719354s)
	I0116 02:55:49.003608 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:49.032053 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:49.036638 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:49.295224 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.101444905s)
	I0116 02:55:49.298081 1892116 addons.go:470] Verifying addon gcp-auth=true in "addons-843965"
	I0116 02:55:49.300721 1892116 out.go:177] * Verifying gcp-auth addon...
	I0116 02:55:49.303740 1892116 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 02:55:49.320008 1892116 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 02:55:49.320034 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:49.503316 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:49.532151 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:49.536788 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:49.765485 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:49.808330 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:50.005352 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:50.031445 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:50.036514 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:50.308383 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:50.503235 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:50.532626 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:50.538408 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:50.808398 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:51.003196 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:51.031928 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:51.036386 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:51.307916 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:51.503934 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:51.531618 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:51.536130 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:51.807773 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:52.008334 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:52.032182 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:52.036916 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:52.266410 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:52.312469 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:52.505386 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:52.532179 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:52.537046 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:52.809595 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:53.003562 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:53.033057 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:53.038139 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:53.307372 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:53.503667 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:53.532071 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:53.537019 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:53.808248 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:54.004455 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:54.031715 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:54.036878 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:54.270937 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:54.307789 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:54.504041 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:54.532463 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:54.537903 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:54.807802 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:55.004322 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:55.042284 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:55.043533 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:55.311980 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:55.503533 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:55.531380 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:55.535789 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:55.807614 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:56.008664 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:56.032546 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:56.038995 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:56.308020 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:56.502952 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:56.531474 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:56.536537 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:56.766842 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:56.807399 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:57.004713 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:57.031557 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:57.035725 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:57.308600 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:57.503326 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:57.532279 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:57.537287 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:57.808274 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:58.006356 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:58.032150 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:58.036471 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:58.307764 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:58.503959 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:58.531595 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:58.536238 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:58.807603 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:59.003755 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:59.031410 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:59.039727 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:59.267530 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:59.308011 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:59.503557 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:59.531717 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:59.536557 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:59.807576 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:00.009781 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:00.050172 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:00.051023 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:00.307967 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:00.503142 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:00.531114 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:00.536344 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:00.807884 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:01.003941 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:01.032042 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:01.036681 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:01.307696 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:01.503291 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:01.531894 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:01.536336 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:01.765603 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:01.808389 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:02.004544 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:02.032280 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:02.037211 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:02.308101 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:02.503877 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:02.531172 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:02.536467 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:02.808091 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:03.003462 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:03.031946 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:03.036502 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:03.307219 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:03.503310 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:03.531337 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:03.536837 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:03.765841 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:03.808226 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:04.004480 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:04.031288 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:04.036806 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:04.308238 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:04.503603 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:04.531769 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:04.536286 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:04.807785 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:05.004163 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:05.032212 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:05.036501 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:05.307931 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:05.503421 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:05.531766 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:05.535830 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:05.808181 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:06.011199 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:06.031763 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:06.036773 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:06.274551 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:06.307963 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:06.502990 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:06.531239 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:06.537160 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:06.807831 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:07.003629 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:07.032090 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:07.036301 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:07.308182 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:07.502649 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:07.544560 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:07.545518 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:07.807106 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:08.004979 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:08.031418 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:08.035853 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:08.307621 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:08.502939 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:08.531384 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:08.535800 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:08.767255 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:08.808022 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:09.004204 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:09.031338 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:09.035943 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:09.308322 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:09.503556 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:09.531646 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:09.535788 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:09.807932 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:10.004540 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:10.033206 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:10.045405 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:10.308084 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:10.503349 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:10.531302 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:10.536664 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:10.807590 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:11.003791 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:11.031253 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:11.036599 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:11.265107 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:11.308341 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:11.503739 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:11.532128 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:11.536614 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:11.807286 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:12.020799 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:12.034936 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:12.043586 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:12.316444 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:12.502858 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:12.532291 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:12.539740 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:12.807486 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:13.004307 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:13.032349 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:13.037490 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:13.265227 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:13.308051 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:13.503835 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:13.531524 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:13.536139 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:13.807979 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:14.004774 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:14.031534 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:14.036741 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:14.308939 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:14.504559 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:14.533951 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:14.537528 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:14.807920 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:15.003848 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:15.032824 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:15.037374 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:15.308255 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:15.503030 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:15.531238 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:15.536882 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:15.771282 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:15.807861 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:16.003716 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:16.032136 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:16.037053 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:16.308588 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:16.506950 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:16.533353 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:16.538411 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:16.808276 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:17.003902 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:17.032105 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:17.038059 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:17.307848 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:17.503161 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:17.531965 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:17.536547 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:17.807294 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:18.004359 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:18.032368 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:18.037586 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:18.265789 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:18.307492 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:18.503002 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:18.534088 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:18.537527 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:18.808217 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:19.003574 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:19.032063 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:19.039880 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:19.308151 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:19.508801 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:19.531145 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:19.536565 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:19.807126 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:20.004301 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:20.031942 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:20.036378 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:20.307776 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:20.505655 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:20.531646 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:20.535984 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:20.765717 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:20.808085 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:21.003758 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:21.031706 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:21.036413 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:21.307410 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:21.503547 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:21.531882 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:21.536016 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:21.765042 1892116 pod_ready.go:92] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.765069 1892116 pod_ready.go:81] duration metric: took 42.506558132s waiting for pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.765081 1892116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v67m8" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.767124 1892116 pod_ready.go:97] error getting pod "coredns-5dd5756b68-v67m8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-v67m8" not found
	I0116 02:56:21.767151 1892116 pod_ready.go:81] duration metric: took 2.063826ms waiting for pod "coredns-5dd5756b68-v67m8" in "kube-system" namespace to be "Ready" ...
	E0116 02:56:21.767162 1892116 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-v67m8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-v67m8" not found
	I0116 02:56:21.767168 1892116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.772325 1892116 pod_ready.go:92] pod "etcd-addons-843965" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.772344 1892116 pod_ready.go:81] duration metric: took 5.16846ms waiting for pod "etcd-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.772356 1892116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.777796 1892116 pod_ready.go:92] pod "kube-apiserver-addons-843965" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.777819 1892116 pod_ready.go:81] duration metric: took 5.455221ms waiting for pod "kube-apiserver-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.777829 1892116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.783266 1892116 pod_ready.go:92] pod "kube-controller-manager-addons-843965" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.783287 1892116 pod_ready.go:81] duration metric: took 5.449953ms waiting for pod "kube-controller-manager-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.783299 1892116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shxz5" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.807808 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:21.962643 1892116 pod_ready.go:92] pod "kube-proxy-shxz5" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.962667 1892116 pod_ready.go:81] duration metric: took 179.361184ms waiting for pod "kube-proxy-shxz5" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.962679 1892116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:22.004210 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:22.032101 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:22.036546 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:22.310413 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:22.364012 1892116 pod_ready.go:92] pod "kube-scheduler-addons-843965" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:22.364049 1892116 pod_ready.go:81] duration metric: took 401.354806ms waiting for pod "kube-scheduler-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:22.364060 1892116 pod_ready.go:38] duration metric: took 43.119051002s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
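
The pod_ready.go entries above show minikube polling each system-critical pod until its PodReady condition flips to True, then recording a duration metric. A minimal sketch of that readiness check, assuming client-go; the hard-coded pod name (taken from the log), namespace, and 2s poll interval are illustrative, not minikube's actual pod_ready.go:

```go
// podready_sketch.go — a minimal sketch, assuming client-go; not minikube's
// actual pod_ready.go implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, mirroring the
// "Ready":"True" / "Ready":"False" states printed in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll for up to 6m, the same budget the log shows
	// ("waiting up to 6m0s for pod ... to be Ready").
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-5dd5756b68-drb7k", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient API errors
		}
		return isPodReady(pod), nil
	})
	fmt.Println("wait finished:", err)
}
```

Returning false, nil on a transient Get error keeps the poll alive instead of failing the whole wait, which is why the log can emit "Ready":"False" lines for 42s before succeeding.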
	I0116 02:56:22.364074 1892116 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:56:22.364153 1892116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:56:22.396113 1892116 api_server.go:72] duration metric: took 43.483283408s to wait for apiserver process to appear ...
	I0116 02:56:22.396142 1892116 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:56:22.396163 1892116 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 02:56:22.405784 1892116 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 02:56:22.407560 1892116 api_server.go:141] control plane version: v1.28.4
	I0116 02:56:22.407581 1892116 api_server.go:131] duration metric: took 11.432092ms to wait for apiserver health ...
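
The healthz probe logged at api_server.go:253 is an HTTPS GET against the apiserver that expects a 200 response with body "ok", as seen above. A sketch of that request; the real check authenticates with the kubeconfig's client certificates, so the InsecureSkipVerify and anonymous access below are simplifying assumptions that a locked-down apiserver may reject:

```go
// healthz_sketch.go — a minimal sketch of the /healthz probe shown above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity: skip TLS verification instead of
			// loading the cluster CA and client certs from the kubeconfig.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The healthy case in the log above: HTTP 200 with body "ok".
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
```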
	I0116 02:56:22.407590 1892116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:56:22.504296 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:22.532314 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:22.537575 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:22.572609 1892116 system_pods.go:59] 18 kube-system pods found
	I0116 02:56:22.572687 1892116 system_pods.go:61] "coredns-5dd5756b68-drb7k" [5d711312-1d08-44bc-a927-acb57c46dde3] Running
	I0116 02:56:22.572713 1892116 system_pods.go:61] "csi-hostpath-attacher-0" [b33859d8-a06e-47aa-9e5b-b1fa3361b6ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 02:56:22.572734 1892116 system_pods.go:61] "csi-hostpath-resizer-0" [7816225b-e3b5-4636-ae31-ce0ab725df08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0116 02:56:22.572772 1892116 system_pods.go:61] "csi-hostpathplugin-67k8j" [25de6a1f-7131-4fb2-b2c1-6c456d1dcccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:56:22.572800 1892116 system_pods.go:61] "etcd-addons-843965" [7ba09c04-5376-4351-b2fb-069ebaebc3fa] Running
	I0116 02:56:22.572820 1892116 system_pods.go:61] "kindnet-p7psr" [8b6ba9f1-d3da-4a60-a6ce-8dfda33792b7] Running
	I0116 02:56:22.572837 1892116 system_pods.go:61] "kube-apiserver-addons-843965" [33d1e64e-6d09-43a9-9f8e-70a333257907] Running
	I0116 02:56:22.572853 1892116 system_pods.go:61] "kube-controller-manager-addons-843965" [4fbb42f1-b416-4339-a359-8a97f6589e8d] Running
	I0116 02:56:22.572870 1892116 system_pods.go:61] "kube-ingress-dns-minikube" [8122a341-637a-43d1-99b4-4f74cfcb03f0] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 02:56:22.572892 1892116 system_pods.go:61] "kube-proxy-shxz5" [66275fc0-354e-4fbc-b31e-44770af0e751] Running
	I0116 02:56:22.572911 1892116 system_pods.go:61] "kube-scheduler-addons-843965" [befad0b9-f2ab-4a9d-a74b-8968f4d8d4c9] Running
	I0116 02:56:22.572931 1892116 system_pods.go:61] "metrics-server-7c66d45ddc-cshtq" [e2de00b3-dd3e-4347-a94f-b186d7fe0fea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 02:56:22.572948 1892116 system_pods.go:61] "nvidia-device-plugin-daemonset-zlmrk" [da7bd62d-e415-4145-ad12-6feb7be5fe21] Running
	I0116 02:56:22.572963 1892116 system_pods.go:61] "registry-bzgv9" [af30b04d-da1d-4148-b183-4ca8c48dba30] Running
	I0116 02:56:22.572979 1892116 system_pods.go:61] "registry-proxy-sfv97" [224d6c6a-4fbd-415b-92b0-562bdde1b323] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 02:56:22.572999 1892116 system_pods.go:61] "snapshot-controller-58dbcc7b99-kblzs" [6694f41d-1dee-45de-b020-072a9a790144] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:56:22.573017 1892116 system_pods.go:61] "snapshot-controller-58dbcc7b99-vtcnr" [7f950a4d-15d7-43fa-98d2-fec43d16eab9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:56:22.573033 1892116 system_pods.go:61] "storage-provisioner" [253b2950-26ab-4c7d-ae43-bc75f6fd3e61] Running
	I0116 02:56:22.573051 1892116 system_pods.go:74] duration metric: took 165.455302ms to wait for pod list to return data ...
	I0116 02:56:22.573078 1892116 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:56:22.762937 1892116 default_sa.go:45] found service account: "default"
	I0116 02:56:22.763004 1892116 default_sa.go:55] duration metric: took 189.907787ms for default service account to be created ...
	I0116 02:56:22.763028 1892116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:56:22.807812 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:22.971498 1892116 system_pods.go:86] 18 kube-system pods found
	I0116 02:56:22.971572 1892116 system_pods.go:89] "coredns-5dd5756b68-drb7k" [5d711312-1d08-44bc-a927-acb57c46dde3] Running
	I0116 02:56:22.971596 1892116 system_pods.go:89] "csi-hostpath-attacher-0" [b33859d8-a06e-47aa-9e5b-b1fa3361b6ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 02:56:22.971615 1892116 system_pods.go:89] "csi-hostpath-resizer-0" [7816225b-e3b5-4636-ae31-ce0ab725df08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0116 02:56:22.971651 1892116 system_pods.go:89] "csi-hostpathplugin-67k8j" [25de6a1f-7131-4fb2-b2c1-6c456d1dcccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:56:22.971673 1892116 system_pods.go:89] "etcd-addons-843965" [7ba09c04-5376-4351-b2fb-069ebaebc3fa] Running
	I0116 02:56:22.971690 1892116 system_pods.go:89] "kindnet-p7psr" [8b6ba9f1-d3da-4a60-a6ce-8dfda33792b7] Running
	I0116 02:56:22.971706 1892116 system_pods.go:89] "kube-apiserver-addons-843965" [33d1e64e-6d09-43a9-9f8e-70a333257907] Running
	I0116 02:56:22.971720 1892116 system_pods.go:89] "kube-controller-manager-addons-843965" [4fbb42f1-b416-4339-a359-8a97f6589e8d] Running
	I0116 02:56:22.971750 1892116 system_pods.go:89] "kube-ingress-dns-minikube" [8122a341-637a-43d1-99b4-4f74cfcb03f0] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 02:56:22.971772 1892116 system_pods.go:89] "kube-proxy-shxz5" [66275fc0-354e-4fbc-b31e-44770af0e751] Running
	I0116 02:56:22.971790 1892116 system_pods.go:89] "kube-scheduler-addons-843965" [befad0b9-f2ab-4a9d-a74b-8968f4d8d4c9] Running
	I0116 02:56:22.971809 1892116 system_pods.go:89] "metrics-server-7c66d45ddc-cshtq" [e2de00b3-dd3e-4347-a94f-b186d7fe0fea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 02:56:22.971825 1892116 system_pods.go:89] "nvidia-device-plugin-daemonset-zlmrk" [da7bd62d-e415-4145-ad12-6feb7be5fe21] Running
	I0116 02:56:22.971851 1892116 system_pods.go:89] "registry-bzgv9" [af30b04d-da1d-4148-b183-4ca8c48dba30] Running
	I0116 02:56:22.971876 1892116 system_pods.go:89] "registry-proxy-sfv97" [224d6c6a-4fbd-415b-92b0-562bdde1b323] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 02:56:22.971898 1892116 system_pods.go:89] "snapshot-controller-58dbcc7b99-kblzs" [6694f41d-1dee-45de-b020-072a9a790144] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:56:22.971917 1892116 system_pods.go:89] "snapshot-controller-58dbcc7b99-vtcnr" [7f950a4d-15d7-43fa-98d2-fec43d16eab9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:56:22.971931 1892116 system_pods.go:89] "storage-provisioner" [253b2950-26ab-4c7d-ae43-bc75f6fd3e61] Running
	I0116 02:56:22.971959 1892116 system_pods.go:126] duration metric: took 208.913176ms to wait for k8s-apps to be running ...
	I0116 02:56:22.971983 1892116 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:56:22.972061 1892116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:56:22.988541 1892116 system_svc.go:56] duration metric: took 16.549968ms WaitForService to wait for kubelet.
	I0116 02:56:22.988612 1892116 kubeadm.go:581] duration metric: took 44.075787238s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
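
The kubelet check above shells out over SSH to `sudo systemctl is-active --quiet service kubelet` and treats a zero exit status as running. The same idea locally, without the SSH hop:

```go
// kubelet_sketch.go — a minimal local sketch of the kubelet liveness check
// logged above; minikube runs the equivalent command over ssh_runner.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 iff the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```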
	I0116 02:56:22.988647 1892116 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:56:23.004011 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:23.032432 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:23.036559 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:23.163347 1892116 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 02:56:23.163424 1892116 node_conditions.go:123] node cpu capacity is 2
	I0116 02:56:23.163450 1892116 node_conditions.go:105] duration metric: took 174.786505ms to run NodePressure ...
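
The NodePressure verification reads the node capacity figures that appear in the log (ephemeral storage 203034800Ki, 2 CPUs). A sketch of fetching those values, assuming client-go; not minikube's actual node_conditions.go:

```go
// nodecap_sketch.go — a minimal sketch, assuming client-go, of reading the
// node capacity figures logged above (ephemeral storage and CPU).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// e.g. "node storage ephemeral capacity is 203034800Ki",
		//      "node cpu capacity is 2"
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name, storage.String(), cpu.String())
	}
}
```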
	I0116 02:56:23.163473 1892116 start.go:228] waiting for startup goroutines ...
	I0116 02:56:23.308514 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:23.503305 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:23.534065 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:23.537552 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:23.807523 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:24.010218 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:24.034383 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:24.038011 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:24.311673 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:24.507898 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:24.532487 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:24.536231 1892116 kapi.go:107] duration metric: took 38.004828742s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 02:56:24.807881 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:25.003305 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:25.031707 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:25.308429 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:25.503624 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:25.532222 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:25.808110 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:26.005779 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:26.032369 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:26.308738 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:26.503150 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:26.531416 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:26.810271 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:27.004222 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:27.032144 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:27.308178 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:27.504134 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:27.531907 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:27.807714 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:28.005048 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:28.032162 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:28.308545 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:28.507871 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:28.534406 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:28.807670 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:29.004522 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:29.033411 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:29.308397 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:29.503569 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:29.531727 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:29.807065 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:30.003721 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:30.032549 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:30.307820 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:30.503873 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:30.532916 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:30.810334 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:31.004901 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:31.032321 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:31.308063 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:31.503703 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:31.532288 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:31.807999 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:32.006154 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:32.031357 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:32.308606 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:32.505347 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:32.532164 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:32.808035 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:33.004191 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:33.031599 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:33.307548 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:33.504467 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:33.532875 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:33.807885 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:34.005157 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:34.031835 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:34.307950 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:34.507069 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:34.532278 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:34.808240 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:35.003674 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:35.031450 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:35.308292 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:35.503828 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:35.537520 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:35.810999 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:36.005009 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:36.031949 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:36.307968 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:36.503613 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:36.531609 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:36.807389 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:37.003919 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:37.031833 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:37.307530 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:37.502806 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:37.532038 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:37.807666 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:38.004182 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:38.031685 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:38.307477 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:38.505857 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:38.533980 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:38.808114 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:39.007792 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:39.039518 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:39.308148 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:39.504532 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:39.532859 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:39.814582 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:40.005856 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:40.037687 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:40.308042 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:40.503538 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:40.533190 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:40.808171 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:41.004456 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:41.034139 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:41.308875 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:41.503998 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:41.532349 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:41.808591 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:42.005070 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:42.033809 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:42.309046 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:42.507861 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:42.532264 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:42.812503 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:43.004573 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:43.031360 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:43.308395 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:43.502907 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:43.531851 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:43.807466 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:44.003926 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:44.031453 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:44.307475 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:44.503051 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:44.531044 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:44.807681 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:45.007608 1892116 kapi.go:107] duration metric: took 57.013321783s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
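
Every kapi.go:96 line in this log is one iteration of a label-selector poll: list the pods matching the addon's label, report the current phase, and stop once they all reach Running. A sketch of that loop, assuming client-go; the 250ms interval is inferred from the timestamps above, and the Running-phase criterion is an approximation of minikube's actual kapi.go:

```go
// kapi_sketch.go — a minimal sketch, assuming client-go, of the label-selector
// polling behind the "kapi.go:96] waiting for pod ..." lines in this log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
	err = wait.PollImmediate(250*time.Millisecond, 10*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kube-system").List(
			context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				// Until every matching pod runs, log the phase, e.g.
				// "current state: Pending" as seen throughout this report.
				fmt.Println("waiting, current state:", p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
	fmt.Println("wait finished:", err)
}
```

On success the caller records a duration metric, which is the "took 57.013321783s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver" line directly above.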
	I0116 02:56:45.032242 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:45.308282 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:45.531879 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:45.807543 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:46.031551 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:46.307421 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:46.531980 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:46.807947 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:47.032313 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:47.308095 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:47.531867 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:47.807393 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:48.032058 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:48.307985 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:48.531614 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:48.807151 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:49.032183 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:49.308251 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:49.532010 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:49.807790 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:50.031774 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:50.307602 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:50.530929 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:50.807698 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:51.031415 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:51.307261 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:51.532479 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:51.808707 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:52.031638 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:52.319027 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:52.535123 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:52.808269 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:53.032021 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:53.308421 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:53.532097 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:53.807789 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:54.032373 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:54.324999 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:54.532713 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:54.807818 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:55.032366 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:55.311667 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:55.531642 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:55.807040 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:56.031823 1892116 kapi.go:107] duration metric: took 1m9.505074163s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 02:56:56.311089 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:56.809976 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:57.307551 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:57.807216 1892116 kapi.go:107] duration metric: took 1m8.503474131s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 02:56:57.809429 1892116 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-843965 cluster.
	I0116 02:56:57.811509 1892116 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 02:56:57.813216 1892116 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 02:56:57.815224 1892116 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0116 02:56:57.817099 1892116 addons.go:505] enable addons completed in 1m19.443961092s: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0116 02:56:57.817137 1892116 start.go:233] waiting for cluster config update ...
	I0116 02:56:57.817169 1892116 start.go:242] writing updated cluster config ...
	I0116 02:56:57.817530 1892116 ssh_runner.go:195] Run: rm -f paused
	I0116 02:56:58.157405 1892116 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:56:58.159259 1892116 out.go:177] * Done! kubectl is now configured to use "addons-843965" cluster and "default" namespace by default
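
The gcp-auth messages above point at the `gcp-auth-skip-secret` label as the opt-out for credential mounting. As a minimal sketch of what that looks like in a pod spec (the pod name and image are illustrative placeholders, not part of this run; only the label key comes from the output above):

	# Hypothetical pod that opts out of GCP credential mounting.
	kubectl --context addons-843965 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: nginx
	EOF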
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a12473b576c84       dd1b12fcb6097       8 seconds ago       Exited              hello-world-app           2                   da2f10c9af62e       hello-world-app-5d77478584-xkxj7
	f4bc3263d3a9d       74077e780ec71       32 seconds ago      Running             nginx                     0                   2519b07ddf6c2       nginx
	0d34bf46c828e       21648f71be814       39 seconds ago      Running             headlamp                  0                   65c598bc4931f       headlamp-7ddfbb94ff-q6vng
	43a9d9f2634b7       2a5f29343eb03       2 minutes ago       Running             gcp-auth                  0                   a4ccda24f38d7       gcp-auth-d4c87556c-2m7mf
	b915a048f4ada       af594c6a879f2       2 minutes ago       Exited              patch                     2                   57f7ea89952f1       ingress-nginx-admission-patch-75m88
	71838e965dea9       af594c6a879f2       2 minutes ago       Exited              create                    0                   2aefb233d14d0       ingress-nginx-admission-create-r6qf8
	86fb839a1af30       20e3f2db01e81       2 minutes ago       Running             yakd                      0                   940980a2d7bb2       yakd-dashboard-9947fc6bf-ccdsg
	819369e220a77       97e04611ad434       2 minutes ago       Running             coredns                   0                   f7213821037ce       coredns-5dd5756b68-drb7k
	4b7c69e163454       a89778274bf53       2 minutes ago       Running             cloud-spanner-emulator    0                   b4f3fa1ceabee       cloud-spanner-emulator-64c8c85f65-rw7hq
	52cc6edb069f5       ba04bb24b9575       3 minutes ago       Running             storage-provisioner       0                   5344b6e3d6abc       storage-provisioner
	653b92beb0f55       3ca3ca488cf13       3 minutes ago       Running             kube-proxy                0                   90409322275c0       kube-proxy-shxz5
	cad3dfa1ad9e7       04b4eaa3d3db8       3 minutes ago       Running             kindnet-cni               0                   01cb52cc88d4f       kindnet-p7psr
	51c33b06e0ddb       04b4c447bb9d4       3 minutes ago       Running             kube-apiserver            0                   a8c4e5ba61743       kube-apiserver-addons-843965
	ce7400afe9ca1       9cdd6470f48c8       3 minutes ago       Running             etcd                      0                   9e31239605ff1       etcd-addons-843965
	79153b07155cf       05c284c929889       3 minutes ago       Running             kube-scheduler            0                   391282c0e098f       kube-scheduler-addons-843965
	7d7aa230689f6       9961cbceaf234       3 minutes ago       Running             kube-controller-manager   0                   c9673fca8da20       kube-controller-manager-addons-843965
	
	
	==> containerd <==
	Jan 16 02:59:02 addons-843965 containerd[743]: time="2024-01-16T02:59:02.167878506Z" level=info msg="shim disconnected" id=a12473b576c849459c101313d8525a75d4d588043b2474fa75dbad53d0535652
	Jan 16 02:59:02 addons-843965 containerd[743]: time="2024-01-16T02:59:02.167937540Z" level=warning msg="cleaning up after shim disconnected" id=a12473b576c849459c101313d8525a75d4d588043b2474fa75dbad53d0535652 namespace=k8s.io
	Jan 16 02:59:02 addons-843965 containerd[743]: time="2024-01-16T02:59:02.167949773Z" level=info msg="cleaning up dead shim"
	Jan 16 02:59:02 addons-843965 containerd[743]: time="2024-01-16T02:59:02.188971955Z" level=warning msg="cleanup warnings time=\"2024-01-16T02:59:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11430 runtime=io.containerd.runc.v2\ntime=\"2024-01-16T02:59:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
	Jan 16 02:59:02 addons-843965 containerd[743]: time="2024-01-16T02:59:02.900458693Z" level=info msg="StopContainer for \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\" with timeout 2 (s)"
	Jan 16 02:59:02 addons-843965 containerd[743]: time="2024-01-16T02:59:02.900940476Z" level=info msg="Stop container \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\" with signal terminated"
	Jan 16 02:59:03 addons-843965 containerd[743]: time="2024-01-16T02:59:03.189714443Z" level=info msg="RemoveContainer for \"57ee0cc7b4e5f9ae6fd316877f14a52bf436b46725202511ae8d851054cb5dae\""
	Jan 16 02:59:03 addons-843965 containerd[743]: time="2024-01-16T02:59:03.197658888Z" level=info msg="RemoveContainer for \"57ee0cc7b4e5f9ae6fd316877f14a52bf436b46725202511ae8d851054cb5dae\" returns successfully"
	Jan 16 02:59:04 addons-843965 containerd[743]: time="2024-01-16T02:59:04.908444596Z" level=info msg="Kill container \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\""
	Jan 16 02:59:04 addons-843965 containerd[743]: time="2024-01-16T02:59:04.981181762Z" level=info msg="shim disconnected" id=14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02
	Jan 16 02:59:04 addons-843965 containerd[743]: time="2024-01-16T02:59:04.981250601Z" level=warning msg="cleaning up after shim disconnected" id=14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02 namespace=k8s.io
	Jan 16 02:59:04 addons-843965 containerd[743]: time="2024-01-16T02:59:04.981263926Z" level=info msg="cleaning up dead shim"
	Jan 16 02:59:04 addons-843965 containerd[743]: time="2024-01-16T02:59:04.991783484Z" level=warning msg="cleanup warnings time=\"2024-01-16T02:59:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11473 runtime=io.containerd.runc.v2\n"
	Jan 16 02:59:04 addons-843965 containerd[743]: time="2024-01-16T02:59:04.994778549Z" level=info msg="StopContainer for \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\" returns successfully"
	Jan 16 02:59:04 addons-843965 containerd[743]: time="2024-01-16T02:59:04.995391594Z" level=info msg="StopPodSandbox for \"85d55e5a9fdaef1c88015b0d6036c9ea3ee54e7a21286af45dcfc50dbda4c717\""
	Jan 16 02:59:04 addons-843965 containerd[743]: time="2024-01-16T02:59:04.995463961Z" level=info msg="Container to stop \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.032770532Z" level=info msg="shim disconnected" id=85d55e5a9fdaef1c88015b0d6036c9ea3ee54e7a21286af45dcfc50dbda4c717
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.032978567Z" level=warning msg="cleaning up after shim disconnected" id=85d55e5a9fdaef1c88015b0d6036c9ea3ee54e7a21286af45dcfc50dbda4c717 namespace=k8s.io
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.033050269Z" level=info msg="cleaning up dead shim"
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.044064410Z" level=warning msg="cleanup warnings time=\"2024-01-16T02:59:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11502 runtime=io.containerd.runc.v2\n"
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.090459753Z" level=info msg="TearDown network for sandbox \"85d55e5a9fdaef1c88015b0d6036c9ea3ee54e7a21286af45dcfc50dbda4c717\" successfully"
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.090507383Z" level=info msg="StopPodSandbox for \"85d55e5a9fdaef1c88015b0d6036c9ea3ee54e7a21286af45dcfc50dbda4c717\" returns successfully"
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.197582147Z" level=info msg="RemoveContainer for \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\""
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.202754831Z" level=info msg="RemoveContainer for \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\" returns successfully"
	Jan 16 02:59:05 addons-843965 containerd[743]: time="2024-01-16T02:59:05.203410385Z" level=error msg="ContainerStatus for \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\": not found"
	
	
	==> coredns [819369e220a77836381659af98ccbc1c8ba4bdb0605819cc2b3e988ad6c2c214] <==
	[INFO] 10.244.0.19:58278 - 14290 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050533s
	[INFO] 10.244.0.19:58278 - 17361 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00178138s
	[INFO] 10.244.0.19:52493 - 51305 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002095259s
	[INFO] 10.244.0.19:52493 - 34406 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001762485s
	[INFO] 10.244.0.19:58278 - 43565 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001972087s
	[INFO] 10.244.0.19:58278 - 60764 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000164828s
	[INFO] 10.244.0.19:52493 - 58165 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000112046s
	[INFO] 10.244.0.19:40676 - 35073 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050821s
	[INFO] 10.244.0.19:54069 - 43721 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000213172s
	[INFO] 10.244.0.19:40676 - 28836 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067724s
	[INFO] 10.244.0.19:40676 - 58231 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006065s
	[INFO] 10.244.0.19:40676 - 51006 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000120119s
	[INFO] 10.244.0.19:40676 - 14650 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055908s
	[INFO] 10.244.0.19:40676 - 4428 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059026s
	[INFO] 10.244.0.19:54069 - 58582 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091853s
	[INFO] 10.244.0.19:40676 - 46483 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001491058s
	[INFO] 10.244.0.19:54069 - 18685 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000463s
	[INFO] 10.244.0.19:54069 - 29393 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078636s
	[INFO] 10.244.0.19:54069 - 5712 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074115s
	[INFO] 10.244.0.19:40676 - 27380 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001299327s
	[INFO] 10.244.0.19:54069 - 2587 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064482s
	[INFO] 10.244.0.19:40676 - 62695 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000106729s
	[INFO] 10.244.0.19:54069 - 49065 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001045082s
	[INFO] 10.244.0.19:54069 - 45206 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001204544s
	[INFO] 10.244.0.19:54069 - 28442 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000092641s
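
The NXDOMAIN/NOERROR pairs above are the pod resolver walking its DNS search domains (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, then the EC2 internal domain) before the fully qualified name answers, the usual ndots:5 behaviour of cluster DNS. A hedged way to confirm a single lookup, assuming the container image ships nslookup, is to query the name with a trailing dot so search expansion is skipped:

	# Trailing dot = fully qualified; the resolver goes straight to the
	# record instead of replaying the search-domain walk logged above.
	kubectl --context addons-843965 exec deploy/hello-world-app -- \
	  nslookup hello-world-app.default.svc.cluster.local.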
	
	
	==> describe nodes <==
	Name:               addons-843965
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-843965
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=addons-843965
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_55_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-843965
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:55:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-843965
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:59:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:59:00 +0000   Tue, 16 Jan 2024 02:55:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:59:00 +0000   Tue, 16 Jan 2024 02:55:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:59:00 +0000   Tue, 16 Jan 2024 02:55:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:59:00 +0000   Tue, 16 Jan 2024 02:55:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-843965
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 394a8448e81541d28502617e7d0fe1a2
	  System UUID:                6654ebef-dec4-4c43-b98e-a42300b6aa2b
	  Boot ID:                    db337b58-1f53-411c-9ff2-b8ff3dd0911c
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-rw7hq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  default                     hello-world-app-5d77478584-xkxj7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-d4c87556c-2m7mf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  headlamp                    headlamp-7ddfbb94ff-q6vng                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 coredns-5dd5756b68-drb7k                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m32s
	  kube-system                 etcd-addons-843965                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m44s
	  kube-system                 kindnet-p7psr                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m32s
	  kube-system                 kube-apiserver-addons-843965               250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-addons-843965      200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-shxz5                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-scheduler-addons-843965               100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-ccdsg             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     3m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m30s  kube-proxy       
	  Normal  Starting                 3m44s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s  kubelet          Node addons-843965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s  kubelet          Node addons-843965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s  kubelet          Node addons-843965 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m44s  kubelet          Node addons-843965 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m44s  kubelet          Node addons-843965 status is now: NodeReady
	  Normal  RegisteredNode           3m33s  node-controller  Node addons-843965 event: Registered Node addons-843965 in Controller
	
	
	==> dmesg <==
	[  +0.001077] FS-Cache: O-key=[8] '44dac90000000000'
	[  +0.000784] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000986] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000001afef535
	[  +0.001069] FS-Cache: N-key=[8] '44dac90000000000'
	[  +0.002505] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=00000071 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001002] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=000000002d197214
	[  +0.001094] FS-Cache: O-key=[8] '44dac90000000000'
	[  +0.000798] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001128] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000008d08245a
	[  +0.001092] FS-Cache: N-key=[8] '44dac90000000000'
	[  +2.139529] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=00000000136c64d0
	[  +0.001228] FS-Cache: O-key=[8] '43dac90000000000'
	[  +0.000720] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000007a0384fc
	[  +0.001092] FS-Cache: N-key=[8] '43dac90000000000'
	[  +0.318695] FS-Cache: Duplicate cookie detected
	[  +0.000817] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001202] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=00000000d8eb70b5
	[  +0.001211] FS-Cache: O-key=[8] '49dac90000000000'
	[  +0.000849] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.001037] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000001afef535
	[  +0.001307] FS-Cache: N-key=[8] '49dac90000000000'
	
	
	==> etcd [ce7400afe9ca1bff290931ece2139aa0de0fa0a1da85e8e691fdc4b690da7d05] <==
	{"level":"info","ts":"2024-01-16T02:55:19.094578Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T02:55:19.094593Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-16T02:55:19.095157Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-16T02:55:19.094026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-16T02:55:19.095493Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-16T02:55:19.096236Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T02:55:19.096363Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T02:55:19.929475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T02:55:19.929708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T02:55:19.929817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-16T02:55:19.930008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T02:55:19.930126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-16T02:55:19.930213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-16T02:55:19.930294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-16T02:55:19.933577Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:55:19.937681Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-843965 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T02:55:19.937865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:55:19.938975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T02:55:19.939357Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:55:19.940334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-16T02:55:19.951508Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:55:19.954622Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:55:19.954796Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:55:19.993511Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T02:55:19.993706Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [43a9d9f2634b7279fd0821a0cae509a696927fd7ccd31796cbc80e9bdcd4e301] <==
	2024/01/16 02:56:57 GCP Auth Webhook started!
	2024/01/16 02:57:09 Ready to marshal response ...
	2024/01/16 02:57:09 Ready to write response ...
	2024/01/16 02:57:24 Ready to marshal response ...
	2024/01/16 02:57:24 Ready to write response ...
	2024/01/16 02:57:25 Ready to marshal response ...
	2024/01/16 02:57:25 Ready to write response ...
	2024/01/16 02:57:32 Ready to marshal response ...
	2024/01/16 02:57:32 Ready to write response ...
	2024/01/16 02:57:47 Ready to marshal response ...
	2024/01/16 02:57:47 Ready to write response ...
	2024/01/16 02:58:11 Ready to marshal response ...
	2024/01/16 02:58:11 Ready to write response ...
	2024/01/16 02:58:26 Ready to marshal response ...
	2024/01/16 02:58:26 Ready to write response ...
	2024/01/16 02:58:26 Ready to marshal response ...
	2024/01/16 02:58:26 Ready to write response ...
	2024/01/16 02:58:26 Ready to marshal response ...
	2024/01/16 02:58:26 Ready to write response ...
	2024/01/16 02:58:35 Ready to marshal response ...
	2024/01/16 02:58:35 Ready to write response ...
	2024/01/16 02:58:44 Ready to marshal response ...
	2024/01/16 02:58:44 Ready to write response ...
	
	
	==> kernel <==
	 02:59:10 up  9:41,  0 users,  load average: 1.72, 1.77, 2.09
	Linux addons-843965 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [cad3dfa1ad9e704d8beed303439c3b4ab3b0ba0d46fa9b4768d8e3deeb2aea88] <==
	I0116 02:57:09.838407       1 main.go:227] handling current node
	I0116 02:57:19.849253       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:19.849282       1 main.go:227] handling current node
	I0116 02:57:29.861150       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:29.861183       1 main.go:227] handling current node
	I0116 02:57:39.871472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:39.871499       1 main.go:227] handling current node
	I0116 02:57:49.884452       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:49.884476       1 main.go:227] handling current node
	I0116 02:57:59.898291       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:59.898412       1 main.go:227] handling current node
	I0116 02:58:09.902294       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:58:09.902323       1 main.go:227] handling current node
	I0116 02:58:19.914891       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:58:19.914919       1 main.go:227] handling current node
	I0116 02:58:29.918941       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:58:29.918970       1 main.go:227] handling current node
	I0116 02:58:39.929978       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:58:39.930006       1 main.go:227] handling current node
	I0116 02:58:49.937977       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:58:49.938005       1 main.go:227] handling current node
	I0116 02:58:59.949747       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:58:59.949775       1 main.go:227] handling current node
	I0116 02:59:09.961397       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:59:09.961427       1 main.go:227] handling current node
	
	
	==> kube-apiserver [51c33b06e0ddb509cf60ffeb56a310ca8f81bb4fccf327d00b9cd387c3c34398] <==
	I0116 02:58:27.870644       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:58:27.871781       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:58:27.877334       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:58:27.877861       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:58:27.890023       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:58:27.890080       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:58:27.898587       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:58:27.898885       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:58:27.914902       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:58:27.915181       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:58:27.926261       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:58:27.926321       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:58:27.933375       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:58:27.933685       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0116 02:58:28.915765       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0116 02:58:28.934776       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0116 02:58:28.950549       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0116 02:58:35.306229       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0116 02:58:35.564229       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.112.54"}
	I0116 02:58:41.924371       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0116 02:58:41.931325       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0116 02:58:42.947614       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0116 02:58:44.298507       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.115.175"}
	I0116 02:58:50.196582       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0116 02:59:01.961090       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [7d7aa230689f6e65489805a395467f62456ad976f34a28bdf34d0c0011948874] <==
	W0116 02:58:47.630889       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:58:47.630924       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 02:58:48.144115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="121.383µs"
	W0116 02:58:50.385128       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:58:50.385168       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 02:58:52.053848       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0116 02:58:58.989984       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:58:58.990020       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:58:59.087851       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:58:59.087889       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 02:59:01.887383       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="4.57µs"
	I0116 02:59:01.890983       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0116 02:59:01.912343       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0116 02:59:02.198405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.717079ms"
	I0116 02:59:02.198967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="30.957µs"
	I0116 02:59:03.212365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="8.389133ms"
	I0116 02:59:03.212440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.108µs"
	W0116 02:59:05.684658       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:59:05.684692       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 02:59:07.640906       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0116 02:59:07.640947       1 shared_informer.go:318] Caches are synced for resource quota
	I0116 02:59:08.093726       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0116 02:59:08.093771       1 shared_informer.go:318] Caches are synced for garbage collector
	W0116 02:59:09.104520       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:59:09.104558       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [653b92beb0f55c90a6fc42be3424ed34e624a629bea0fae97ea010f8006e8815] <==
	I0116 02:55:39.506662       1 server_others.go:69] "Using iptables proxy"
	I0116 02:55:39.559217       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0116 02:55:39.605668       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0116 02:55:39.608204       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:55:39.608243       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0116 02:55:39.608252       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0116 02:55:39.608297       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:55:39.608520       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:55:39.608530       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:55:39.609630       1 config.go:188] "Starting service config controller"
	I0116 02:55:39.609668       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:55:39.609687       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:55:39.609695       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:55:39.611404       1 config.go:315] "Starting node config controller"
	I0116 02:55:39.611418       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:55:39.709902       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:55:39.709954       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:55:39.711554       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [79153b07155cf33f0f0fda110c5ea9d3a1f2e3c7f10d052d62d439b265cadc46] <==
	W0116 02:55:23.604486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:55:23.604507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 02:55:23.604641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:55:23.604767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 02:55:23.604741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:55:23.604880       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 02:55:23.611888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:55:23.612453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:55:23.611975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:55:23.612713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:55:23.612025       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:55:23.612817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:55:23.612113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:55:23.612896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 02:55:23.612164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:55:23.612969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:55:23.612309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:55:23.613052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:55:23.612354       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:55:23.613161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 02:55:23.612416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:55:23.613245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:55:23.612686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 02:55:23.613332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0116 02:55:24.696625       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 16 02:58:56 addons-843965 kubelet[1340]: I0116 02:58:56.154818    1340 scope.go:117] "RemoveContainer" containerID="08ecd963398d872f9f44c765af56e40f655b96eed3e7e8f732529c7091d74338"
	Jan 16 02:58:56 addons-843965 kubelet[1340]: E0116 02:58:56.155054    1340 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(8122a341-637a-43d1-99b4-4f74cfcb03f0)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="8122a341-637a-43d1-99b4-4f74cfcb03f0"
	Jan 16 02:59:00 addons-843965 kubelet[1340]: I0116 02:59:00.276739    1340 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsd6b\" (UniqueName: \"kubernetes.io/projected/8122a341-637a-43d1-99b4-4f74cfcb03f0-kube-api-access-jsd6b\") pod \"8122a341-637a-43d1-99b4-4f74cfcb03f0\" (UID: \"8122a341-637a-43d1-99b4-4f74cfcb03f0\") "
	Jan 16 02:59:00 addons-843965 kubelet[1340]: I0116 02:59:00.279379    1340 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8122a341-637a-43d1-99b4-4f74cfcb03f0-kube-api-access-jsd6b" (OuterVolumeSpecName: "kube-api-access-jsd6b") pod "8122a341-637a-43d1-99b4-4f74cfcb03f0" (UID: "8122a341-637a-43d1-99b4-4f74cfcb03f0"). InnerVolumeSpecName "kube-api-access-jsd6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:59:00 addons-843965 kubelet[1340]: I0116 02:59:00.377917    1340 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jsd6b\" (UniqueName: \"kubernetes.io/projected/8122a341-637a-43d1-99b4-4f74cfcb03f0-kube-api-access-jsd6b\") on node \"addons-843965\" DevicePath \"\""
	Jan 16 02:59:01 addons-843965 kubelet[1340]: I0116 02:59:01.171464    1340 scope.go:117] "RemoveContainer" containerID="08ecd963398d872f9f44c765af56e40f655b96eed3e7e8f732529c7091d74338"
	Jan 16 02:59:02 addons-843965 kubelet[1340]: I0116 02:59:02.055303    1340 scope.go:117] "RemoveContainer" containerID="57ee0cc7b4e5f9ae6fd316877f14a52bf436b46725202511ae8d851054cb5dae"
	Jan 16 02:59:02 addons-843965 kubelet[1340]: I0116 02:59:02.060341    1340 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="29bcb981-a7ba-4529-b48a-60899f98fab1" path="/var/lib/kubelet/pods/29bcb981-a7ba-4529-b48a-60899f98fab1/volumes"
	Jan 16 02:59:02 addons-843965 kubelet[1340]: I0116 02:59:02.060788    1340 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8122a341-637a-43d1-99b4-4f74cfcb03f0" path="/var/lib/kubelet/pods/8122a341-637a-43d1-99b4-4f74cfcb03f0/volumes"
	Jan 16 02:59:02 addons-843965 kubelet[1340]: I0116 02:59:02.061240    1340 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f79329d9-c06d-4e31-893e-dfe127d4a526" path="/var/lib/kubelet/pods/f79329d9-c06d-4e31-893e-dfe127d4a526/volumes"
	Jan 16 02:59:02 addons-843965 kubelet[1340]: I0116 02:59:02.198655    1340 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-xkxj7" podStartSLOduration=16.903681031 podCreationTimestamp="2024-01-16 02:58:44 +0000 UTC" firstStartedPulling="2024-01-16 02:58:44.593888822 +0000 UTC m=+198.701051568" lastFinishedPulling="2024-01-16 02:58:45.887638481 +0000 UTC m=+199.994801227" observedRunningTime="2024-01-16 02:59:02.197036002 +0000 UTC m=+216.304198756" watchObservedRunningTime="2024-01-16 02:59:02.19743069 +0000 UTC m=+216.304593436"
	Jan 16 02:59:03 addons-843965 kubelet[1340]: I0116 02:59:03.187208    1340 scope.go:117] "RemoveContainer" containerID="57ee0cc7b4e5f9ae6fd316877f14a52bf436b46725202511ae8d851054cb5dae"
	Jan 16 02:59:03 addons-843965 kubelet[1340]: I0116 02:59:03.187588    1340 scope.go:117] "RemoveContainer" containerID="a12473b576c849459c101313d8525a75d4d588043b2474fa75dbad53d0535652"
	Jan 16 02:59:03 addons-843965 kubelet[1340]: E0116 02:59:03.187922    1340 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-xkxj7_default(49c376b7-0ca8-4660-ac4c-963b0a384e80)\"" pod="default/hello-world-app-5d77478584-xkxj7" podUID="49c376b7-0ca8-4660-ac4c-963b0a384e80"
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.195873    1340 scope.go:117] "RemoveContainer" containerID="14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02"
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.203051    1340 scope.go:117] "RemoveContainer" containerID="14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02"
	Jan 16 02:59:05 addons-843965 kubelet[1340]: E0116 02:59:05.203644    1340 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\": not found" containerID="14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02"
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.203692    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02"} err="failed to get container status \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\": rpc error: code = NotFound desc = an error occurred when try to find container \"14451960e63ec4fa5c19c93d1e1dd5a9e14d77b7a0c185249c7e13cde6eb3b02\": not found"
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.211011    1340 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a7d1424-2d22-4d7c-9004-a365a5790e74-webhook-cert\") pod \"0a7d1424-2d22-4d7c-9004-a365a5790e74\" (UID: \"0a7d1424-2d22-4d7c-9004-a365a5790e74\") "
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.211068    1340 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmh5j\" (UniqueName: \"kubernetes.io/projected/0a7d1424-2d22-4d7c-9004-a365a5790e74-kube-api-access-kmh5j\") pod \"0a7d1424-2d22-4d7c-9004-a365a5790e74\" (UID: \"0a7d1424-2d22-4d7c-9004-a365a5790e74\") "
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.213657    1340 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7d1424-2d22-4d7c-9004-a365a5790e74-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0a7d1424-2d22-4d7c-9004-a365a5790e74" (UID: "0a7d1424-2d22-4d7c-9004-a365a5790e74"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.216269    1340 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7d1424-2d22-4d7c-9004-a365a5790e74-kube-api-access-kmh5j" (OuterVolumeSpecName: "kube-api-access-kmh5j") pod "0a7d1424-2d22-4d7c-9004-a365a5790e74" (UID: "0a7d1424-2d22-4d7c-9004-a365a5790e74"). InnerVolumeSpecName "kube-api-access-kmh5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.311579    1340 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kmh5j\" (UniqueName: \"kubernetes.io/projected/0a7d1424-2d22-4d7c-9004-a365a5790e74-kube-api-access-kmh5j\") on node \"addons-843965\" DevicePath \"\""
	Jan 16 02:59:05 addons-843965 kubelet[1340]: I0116 02:59:05.311618    1340 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a7d1424-2d22-4d7c-9004-a365a5790e74-webhook-cert\") on node \"addons-843965\" DevicePath \"\""
	Jan 16 02:59:06 addons-843965 kubelet[1340]: I0116 02:59:06.058277    1340 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0a7d1424-2d22-4d7c-9004-a365a5790e74" path="/var/lib/kubelet/pods/0a7d1424-2d22-4d7c-9004-a365a5790e74/volumes"
	
	
	==> storage-provisioner [52cc6edb069f5ef20c0e1aad56d892e4804d050c4b15c380e01c9531cd31f778] <==
	I0116 02:55:44.382026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:55:44.416843       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:55:44.416950       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:55:44.433293       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:55:44.435186       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-843965_5dbe9549-f38f-487e-be21-e9fddd196f3e!
	I0116 02:55:44.440583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ad83ebc-61b6-482a-a784-f8e0ed412c1a", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-843965_5dbe9549-f38f-487e-be21-e9fddd196f3e became leader
	I0116 02:55:44.535941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-843965_5dbe9549-f38f-487e-be21-e9fddd196f3e!
	

-- /stdout --
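
The storage-provisioner log above shows it acquiring leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object. For anyone replaying this post-mortem by hand, a minimal check of who currently holds that lock, assuming the cluster is still up (the object name is taken from the event in the log; this provisioner library records the current holder in an annotation on the Endpoints object):

	# Inspect the leader-election record used by the hostpath provisioner
	kubectl --context addons-843965 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
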
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-843965 -n addons-843965
helpers_test.go:261: (dbg) Run:  kubectl --context addons-843965 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (36.55s)
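
The kubelet log above shows the minikube-ingress-dns container stuck in CrashLoopBackOff (back-off 2m40s) before its pod volumes were unmounted. A minimal triage sketch, assuming the addons-843965 profile is still running; the pod name is taken verbatim from the kubelet log, and --previous only works while the crashed container is still retained by the runtime:

	# Restart count and last termination state of the ingress-dns pod
	kubectl --context addons-843965 -n kube-system describe pod kube-ingress-dns-minikube
	# Logs from the previously crashed container instance, if still retained
	kubectl --context addons-843965 -n kube-system logs kube-ingress-dns-minikube --previous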
TestAddons/parallel/CloudSpanner (8.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-rw7hq" [828c6c00-8b50-435a-8869-35da939528b7] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00422761s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-843965
addons_test.go:860: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable cloud-spanner -p addons-843965: exit status 11 (792.485854ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-16T02:58:21Z" level=error msg="stat /run/containerd/runc/k8s.io/afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:861: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 addons disable cloud-spanner -p addons-843965" : exit status 11
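
Exit status 11 (MK_ADDON_DISABLE_PAUSED) comes from minikube's pre-disable check for paused containers, which shells out to the runc command quoted in the stderr block; the stat error suggests a container's runc state directory disappeared between the listing and the stat. A rough sketch for re-running that check by hand, assuming the profile is still up (the runc invocation is copied verbatim from the error above):

	# Run the same paused-container listing inside the minikube node
	minikube -p addons-843965 ssh "sudo runc --root /run/containerd/runc/k8s.io list -f json"
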
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-843965
helpers_test.go:235: (dbg) docker inspect addons-843965:

-- stdout --
	[
	    {
	        "Id": "957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0",
	        "Created": "2024-01-16T02:55:00.86248583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1892575,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T02:55:01.184798336Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0/hostname",
	        "HostsPath": "/var/lib/docker/containers/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0/hosts",
	        "LogPath": "/var/lib/docker/containers/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0/957975e94f70b604ef2fd38a804b6a640f2a2481919df990ffd0056ea75f36a0-json.log",
	        "Name": "/addons-843965",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-843965:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-843965",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2157af1e163464c740e3a071e36883785e37de9b175ba968e06bb16d5c79b14e-init/diff:/var/lib/docker/overlay2/261e7c2ec33123e281bd6870ab3b0bda4a6870d39bd5f5e877084941df0b6b78/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2157af1e163464c740e3a071e36883785e37de9b175ba968e06bb16d5c79b14e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2157af1e163464c740e3a071e36883785e37de9b175ba968e06bb16d5c79b14e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2157af1e163464c740e3a071e36883785e37de9b175ba968e06bb16d5c79b14e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-843965",
	                "Source": "/var/lib/docker/volumes/addons-843965/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-843965",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-843965",
	                "name.minikube.sigs.k8s.io": "addons-843965",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "71519a76ae7b8526ca61cef33e7b5afbdbeb9f2ef2e9b81aad28660efab78e1c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35022"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35019"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35021"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35020"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/71519a76ae7b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-843965": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "957975e94f70",
	                        "addons-843965"
	                    ],
	                    "NetworkID": "c66612f51545ad0e83b9184eae5568eb04ff39420657456eeae92cdcba98b2d9",
	                    "EndpointID": "45a141facfe91b301d1f8b5c2b54dd492b90e66b665e215b23d0573fc06ab2f5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
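
Most of the inspect dump above is noise for this failure; the useful fields are the published ports and the static IP. A short sketch for extracting just those, assuming the node container still exists (the first Go template mirrors the one minikube itself uses later in the start log to look up the SSH port):

	# Host port that 8443/tcp (the apiserver) is published on
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-843965
	# Static IP on the addons-843965 network
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-843965").IPAddress}}' addons-843965
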
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-843965 -n addons-843965
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-843965 logs -n 25: (1.687431747s)
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC | 16 Jan 24 02:53 UTC |
	| delete  | -p download-only-807644                                                                     | download-only-807644   | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC | 16 Jan 24 02:53 UTC |
	| start   | -o=json --download-only                                                                     | download-only-111300   | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC |                     |
	|         | -p download-only-111300                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| delete  | -p download-only-111300                                                                     | download-only-111300   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| start   | -o=json --download-only                                                                     | download-only-795548   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | -p download-only-795548                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| delete  | -p download-only-795548                                                                     | download-only-795548   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| delete  | -p download-only-807644                                                                     | download-only-807644   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| delete  | -p download-only-111300                                                                     | download-only-111300   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| delete  | -p download-only-795548                                                                     | download-only-795548   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| start   | --download-only -p                                                                          | download-docker-734822 | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | download-docker-734822                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-734822                                                                   | download-docker-734822 | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-337521   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | binary-mirror-337521                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34529                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-337521                                                                     | binary-mirror-337521   | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | addons-843965                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | addons-843965                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-843965 --wait=true                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-843965 ip                                                                            | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:57 UTC |
	| addons  | addons-843965 addons disable                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:57 UTC |
	|         | -p addons-843965                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-843965 ssh cat                                                                       | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:57 UTC |
	|         | /opt/local-path-provisioner/pvc-7b134c94-38a8-4396-b5f8-502ac0f0b814_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-843965 addons disable                                                                | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:57 UTC | 16 Jan 24 02:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-843965 addons                                                                        | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-843965          | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC |                     |
	|         | addons-843965                                                                               |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:54:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:54:53.982631 1892116 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:54:53.982843 1892116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:54:53.982873 1892116 out.go:309] Setting ErrFile to fd 2...
	I0116 02:54:53.982894 1892116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:54:53.983172 1892116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 02:54:53.983655 1892116 out.go:303] Setting JSON to false
	I0116 02:54:53.984570 1892116 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34630,"bootTime":1705339064,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 02:54:53.984678 1892116 start.go:138] virtualization:  
	I0116 02:54:53.987343 1892116 out.go:177] * [addons-843965] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 02:54:53.989970 1892116 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:54:53.990108 1892116 notify.go:220] Checking for updates...
	I0116 02:54:53.994514 1892116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:54:53.996764 1892116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 02:54:53.998946 1892116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 02:54:54.003937 1892116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 02:54:54.006071 1892116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:54:54.008130 1892116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:54:54.031881 1892116 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:54:54.032010 1892116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:54:54.114602 1892116 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-16 02:54:54.104802624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:54:54.114732 1892116 docker.go:295] overlay module found
	I0116 02:54:54.116872 1892116 out.go:177] * Using the docker driver based on user configuration
	I0116 02:54:54.118745 1892116 start.go:298] selected driver: docker
	I0116 02:54:54.118759 1892116 start.go:902] validating driver "docker" against <nil>
	I0116 02:54:54.118772 1892116 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:54:54.119461 1892116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:54:54.179982 1892116 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-16 02:54:54.170247676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:54:54.180142 1892116 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:54:54.180394 1892116 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:54:54.182151 1892116 out.go:177] * Using Docker driver with root privileges
	I0116 02:54:54.183728 1892116 cni.go:84] Creating CNI manager for ""
	I0116 02:54:54.183750 1892116 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:54:54.183762 1892116 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:54:54.183773 1892116 start_flags.go:321] config:
	{Name:addons-843965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-843965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:54:54.185748 1892116 out.go:177] * Starting control plane node addons-843965 in cluster addons-843965
	I0116 02:54:54.187393 1892116 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0116 02:54:54.189042 1892116 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:54:54.190727 1892116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:54:54.190781 1892116 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0116 02:54:54.190805 1892116 cache.go:56] Caching tarball of preloaded images
	I0116 02:54:54.190817 1892116 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:54:54.190882 1892116 preload.go:174] Found /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0116 02:54:54.190892 1892116 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0116 02:54:54.191246 1892116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/config.json ...
	I0116 02:54:54.191279 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/config.json: {Name:mk31bcf33447fff82611ee0607a5f06e45495f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:54:54.208782 1892116 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 02:54:54.208808 1892116 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 02:54:54.208831 1892116 cache.go:194] Successfully downloaded all kic artifacts
	I0116 02:54:54.208890 1892116 start.go:365] acquiring machines lock for addons-843965: {Name:mkc6ac54037945c19e3ff2dd20ef63e1ab89dd31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:54:54.209021 1892116 start.go:369] acquired machines lock for "addons-843965" in 111.767µs
	I0116 02:54:54.209047 1892116 start.go:93] Provisioning new machine with config: &{Name:addons-843965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-843965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 02:54:54.209127 1892116 start.go:125] createHost starting for "" (driver="docker")
	I0116 02:54:54.211622 1892116 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0116 02:54:54.211882 1892116 start.go:159] libmachine.API.Create for "addons-843965" (driver="docker")
	I0116 02:54:54.211935 1892116 client.go:168] LocalClient.Create starting
	I0116 02:54:54.212053 1892116 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem
	I0116 02:54:54.964023 1892116 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem
	I0116 02:54:55.169919 1892116 cli_runner.go:164] Run: docker network inspect addons-843965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 02:54:55.190812 1892116 cli_runner.go:211] docker network inspect addons-843965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 02:54:55.190909 1892116 network_create.go:281] running [docker network inspect addons-843965] to gather additional debugging logs...
	I0116 02:54:55.190934 1892116 cli_runner.go:164] Run: docker network inspect addons-843965
	W0116 02:54:55.208024 1892116 cli_runner.go:211] docker network inspect addons-843965 returned with exit code 1
	I0116 02:54:55.208060 1892116 network_create.go:284] error running [docker network inspect addons-843965]: docker network inspect addons-843965: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-843965 not found
	I0116 02:54:55.208073 1892116 network_create.go:286] output of [docker network inspect addons-843965]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-843965 not found
	
	** /stderr **
	I0116 02:54:55.208167 1892116 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:54:55.228585 1892116 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024cb360}
	I0116 02:54:55.228628 1892116 network_create.go:124] attempt to create docker network addons-843965 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 02:54:55.228689 1892116 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-843965 addons-843965
	I0116 02:54:55.321169 1892116 network_create.go:108] docker network addons-843965 192.168.49.0/24 created
	I0116 02:54:55.321202 1892116 kic.go:121] calculated static IP "192.168.49.2" for the "addons-843965" container
	I0116 02:54:55.321278 1892116 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 02:54:55.344148 1892116 cli_runner.go:164] Run: docker volume create addons-843965 --label name.minikube.sigs.k8s.io=addons-843965 --label created_by.minikube.sigs.k8s.io=true
	I0116 02:54:55.368295 1892116 oci.go:103] Successfully created a docker volume addons-843965
	I0116 02:54:55.368380 1892116 cli_runner.go:164] Run: docker run --rm --name addons-843965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-843965 --entrypoint /usr/bin/test -v addons-843965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 02:54:56.564693 1892116 cli_runner.go:217] Completed: docker run --rm --name addons-843965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-843965 --entrypoint /usr/bin/test -v addons-843965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.196272123s)
	I0116 02:54:56.564725 1892116 oci.go:107] Successfully prepared a docker volume addons-843965
	I0116 02:54:56.564754 1892116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:54:56.564776 1892116 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 02:54:56.564864 1892116 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-843965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 02:55:00.776081 1892116 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-843965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.21116914s)
	I0116 02:55:00.776115 1892116 kic.go:203] duration metric: took 4.211336 seconds to extract preloaded images to volume
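The 4.2-second extraction mounts the preloaded image tarball read-only and untars it into the node volume; roughly (KICBASE as in the previous sketch, tarball path taken from the log):

	PRELOAD=/home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro -v addons-843965:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir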
	W0116 02:55:00.776258 1892116 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 02:55:00.776402 1892116 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 02:55:00.844475 1892116 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-843965 --name addons-843965 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-843965 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-843965 --network addons-843965 --ip 192.168.49.2 --volume addons-843965:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 02:55:01.194469 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Running}}
	I0116 02:55:01.224105 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:01.259127 1892116 cli_runner.go:164] Run: docker exec addons-843965 stat /var/lib/dpkg/alternatives/iptables
	I0116 02:55:01.328915 1892116 oci.go:144] the created container "addons-843965" has a running status.
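The node itself is just a privileged container pinned to the new network and static IP, with the API-server and SSH ports published to loopback; trimmed to the essentials of the docker run above (KICBASE as before):

	docker run -d -t --privileged --security-opt seccomp=unconfined \
	  --network addons-843965 --ip 192.168.49.2 --volume addons-843965:/var \
	  --memory=4000mb --cpus=2 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  --hostname addons-843965 --name addons-843965 "$KICBASE"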
	I0116 02:55:01.328947 1892116 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa...
	I0116 02:55:01.834881 1892116 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 02:55:01.866408 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:01.900012 1892116 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 02:55:01.900041 1892116 kic_runner.go:114] Args: [docker exec --privileged addons-843965 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 02:55:01.983792 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:02.017410 1892116 machine.go:88] provisioning docker machine ...
	I0116 02:55:02.017478 1892116 ubuntu.go:169] provisioning hostname "addons-843965"
	I0116 02:55:02.017548 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:02.067745 1892116 main.go:141] libmachine: Using SSH client type: native
	I0116 02:55:02.068212 1892116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35023 <nil> <nil>}
	I0116 02:55:02.068232 1892116 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-843965 && echo "addons-843965" | sudo tee /etc/hostname
	I0116 02:55:02.247132 1892116 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-843965
	
	I0116 02:55:02.247224 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:02.267119 1892116 main.go:141] libmachine: Using SSH client type: native
	I0116 02:55:02.267536 1892116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35023 <nil> <nil>}
	I0116 02:55:02.267558 1892116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-843965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-843965/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-843965' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:55:02.411819 1892116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:55:02.411849 1892116 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17967-1885793/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-1885793/.minikube}
	I0116 02:55:02.411885 1892116 ubuntu.go:177] setting up certificates
	I0116 02:55:02.411896 1892116 provision.go:83] configureAuth start
	I0116 02:55:02.411977 1892116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-843965
	I0116 02:55:02.431073 1892116 provision.go:138] copyHostCerts
	I0116 02:55:02.431165 1892116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.pem (1078 bytes)
	I0116 02:55:02.431347 1892116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/cert.pem (1123 bytes)
	I0116 02:55:02.431453 1892116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/key.pem (1679 bytes)
	I0116 02:55:02.431531 1892116 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem org=jenkins.addons-843965 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-843965]
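minikube generates this CA-signed server certificate in Go; as a hypothetical openssl analogue (self-signed here, unlike the real CA-signed cert, and requiring OpenSSL 1.1.1+ for -addext), a certificate with the same SANs would look like:

	# self-signed stand-in, NOT minikube's actual CA-signed code path
	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.addons-843965" \
	  -addext "subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-843965"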
	I0116 02:55:02.952608 1892116 provision.go:172] copyRemoteCerts
	I0116 02:55:02.952694 1892116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:55:02.952738 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:02.972559 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.071821 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:55:03.100569 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 02:55:03.129868 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:55:03.158693 1892116 provision.go:86] duration metric: configureAuth took 746.778527ms
	I0116 02:55:03.158735 1892116 ubuntu.go:193] setting minikube options for container-runtime
	I0116 02:55:03.158926 1892116 config.go:182] Loaded profile config "addons-843965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:55:03.158939 1892116 machine.go:91] provisioned docker machine in 1.14150702s
	I0116 02:55:03.158946 1892116 client.go:171] LocalClient.Create took 8.947003804s
	I0116 02:55:03.158968 1892116 start.go:167] duration metric: libmachine.API.Create for "addons-843965" took 8.947087412s
	I0116 02:55:03.158981 1892116 start.go:300] post-start starting for "addons-843965" (driver="docker")
	I0116 02:55:03.158990 1892116 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:55:03.159043 1892116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:55:03.159089 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:03.177044 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.276430 1892116 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:55:03.280762 1892116 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 02:55:03.280849 1892116 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 02:55:03.280869 1892116 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 02:55:03.280880 1892116 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 02:55:03.280891 1892116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-1885793/.minikube/addons for local assets ...
	I0116 02:55:03.280970 1892116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-1885793/.minikube/files for local assets ...
	I0116 02:55:03.281000 1892116 start.go:303] post-start completed in 122.014124ms
	I0116 02:55:03.281303 1892116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-843965
	I0116 02:55:03.299093 1892116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/config.json ...
	I0116 02:55:03.299376 1892116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:55:03.299435 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:03.318138 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.411511 1892116 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 02:55:03.417367 1892116 start.go:128] duration metric: createHost completed in 9.20822512s
	I0116 02:55:03.417390 1892116 start.go:83] releasing machines lock for "addons-843965", held for 9.208361099s
	I0116 02:55:03.417480 1892116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-843965
	I0116 02:55:03.435237 1892116 ssh_runner.go:195] Run: cat /version.json
	I0116 02:55:03.435300 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:03.435545 1892116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:55:03.435611 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:03.460290 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.460953 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:03.558805 1892116 ssh_runner.go:195] Run: systemctl --version
	I0116 02:55:03.695325 1892116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:55:03.701099 1892116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0116 02:55:03.730065 1892116 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0116 02:55:03.730158 1892116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:55:03.763496 1892116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0116 02:55:03.763515 1892116 start.go:475] detecting cgroup driver to use...
	I0116 02:55:03.763545 1892116 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 02:55:03.763593 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 02:55:03.777393 1892116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:55:03.790091 1892116 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:55:03.790199 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:55:03.806387 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:55:03.822379 1892116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:55:03.912719 1892116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:55:04.015594 1892116 docker.go:233] disabling docker service ...
	I0116 02:55:04.015688 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:55:04.038615 1892116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:55:04.052792 1892116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:55:04.159590 1892116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:55:04.255107 1892116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:55:04.269231 1892116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:55:04.290247 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 02:55:04.304398 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 02:55:04.317293 1892116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 02:55:04.317379 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 02:55:04.329536 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:55:04.342311 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 02:55:04.354955 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:55:04.367264 1892116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:55:04.379493 1892116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 02:55:04.392026 1892116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:55:04.402990 1892116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:55:04.414692 1892116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:55:04.530217 1892116 ssh_runner.go:195] Run: sudo systemctl restart containerd
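Taken together, the sed edits above pin the pause image, disable OOM-score restriction, and force the cgroupfs driver in /etc/containerd/config.toml; a quick way to confirm the effective settings after the restart:

	grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# expected, per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"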
	I0116 02:55:04.691114 1892116 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0116 02:55:04.691238 1892116 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0116 02:55:04.696064 1892116 start.go:543] Will wait 60s for crictl version
	I0116 02:55:04.696175 1892116 ssh_runner.go:195] Run: which crictl
	I0116 02:55:04.700578 1892116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:55:04.745171 1892116 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0116 02:55:04.745293 1892116 ssh_runner.go:195] Run: containerd --version
	I0116 02:55:04.781244 1892116 ssh_runner.go:195] Run: containerd --version
	I0116 02:55:04.817162 1892116 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0116 02:55:04.818790 1892116 cli_runner.go:164] Run: docker network inspect addons-843965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 02:55:04.835837 1892116 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 02:55:04.840319 1892116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:55:04.853406 1892116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:55:04.853587 1892116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:55:04.896139 1892116 containerd.go:612] all images are preloaded for containerd runtime.
	I0116 02:55:04.896165 1892116 containerd.go:519] Images already preloaded, skipping extraction
	I0116 02:55:04.896234 1892116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:55:04.934124 1892116 containerd.go:612] all images are preloaded for containerd runtime.
	I0116 02:55:04.934148 1892116 cache_images.go:84] Images are preloaded, skipping loading
	I0116 02:55:04.934204 1892116 ssh_runner.go:195] Run: sudo crictl info
	I0116 02:55:04.974382 1892116 cni.go:84] Creating CNI manager for ""
	I0116 02:55:04.974408 1892116 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:55:04.974464 1892116 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:55:04.974497 1892116 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-843965 NodeName:addons-843965 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:55:04.974643 1892116 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-843965"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
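A kubeadm config like the one above can be validated without mutating the node via dry-run mode; a minimal sketch using the pinned binary path from this run:

	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run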
	
	I0116 02:55:04.974712 1892116 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-843965 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-843965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:55:04.974777 1892116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:55:04.985383 1892116 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:55:04.985471 1892116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:55:04.995828 1892116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0116 02:55:05.020096 1892116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:55:05.043537 1892116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
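With the drop-in and unit file in place, systemd only needs to re-read its configuration; the actual kubelet start is deferred to kubeadm's kubelet-start phase further below, so the manual equivalent at this point is just:

	sudo systemctl daemon-reload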
	I0116 02:55:05.066048 1892116 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 02:55:05.070956 1892116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:55:05.085006 1892116 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965 for IP: 192.168.49.2
	I0116 02:55:05.085040 1892116 certs.go:190] acquiring lock for shared ca certs: {Name:mk53d39e364f11aa45d491413f4acdef0406f659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:05.085903 1892116 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key
	I0116 02:55:05.566600 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt ...
	I0116 02:55:05.566632 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt: {Name:mkdc5ed6571f50d2e0aab8c7fed4eb3fb81c1731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:05.566826 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key ...
	I0116 02:55:05.566841 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key: {Name:mkd5624a3d41975891289b1ea898068bb8950d9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:05.566927 1892116 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key
	I0116 02:55:06.098738 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.crt ...
	I0116 02:55:06.098768 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.crt: {Name:mk09ac82365c28dac5db824c5d79ac4ca94b7a85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.098954 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key ...
	I0116 02:55:06.098965 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key: {Name:mk274d84808802a3d8948cc4330d55c86d0481be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.099099 1892116 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.key
	I0116 02:55:06.099118 1892116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt with IP's: []
	I0116 02:55:06.684553 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt ...
	I0116 02:55:06.684586 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: {Name:mkc0857303b93f77dcd17b744c4a61aeb7ad070e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.685530 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.key ...
	I0116 02:55:06.685551 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.key: {Name:mk17c81f0815c44f212d704f18242ef523c5ddfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.686226 1892116 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key.dd3b5fb2
	I0116 02:55:06.686254 1892116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:55:06.823278 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt.dd3b5fb2 ...
	I0116 02:55:06.823308 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt.dd3b5fb2: {Name:mk41672ca4d63f94f04fd9d08f2d8af03af51a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.823520 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key.dd3b5fb2 ...
	I0116 02:55:06.823537 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key.dd3b5fb2: {Name:mk6f49ed1d7ec48ea445470d369ab62d1d740e43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:06.823633 1892116 certs.go:337] copying /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt
	I0116 02:55:06.823716 1892116 certs.go:341] copying /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key
	I0116 02:55:06.823772 1892116 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.key
	I0116 02:55:06.823792 1892116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.crt with IP's: []
	I0116 02:55:07.360436 1892116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.crt ...
	I0116 02:55:07.360472 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.crt: {Name:mk190483ecdd5fa8b455db472a83c2adff797c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:07.361304 1892116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.key ...
	I0116 02:55:07.361324 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.key: {Name:mk33a944a0e28c2cdb9a5b4915ed65a60ebf8883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:07.362071 1892116 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:55:07.362127 1892116 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:55:07.362181 1892116 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:55:07.362220 1892116 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem (1679 bytes)
	I0116 02:55:07.362861 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:55:07.393170 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 02:55:07.423315 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:55:07.454769 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 02:55:07.484226 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:55:07.513741 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 02:55:07.542937 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:55:07.571695 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 02:55:07.600808 1892116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:55:07.629580 1892116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:55:07.650745 1892116 ssh_runner.go:195] Run: openssl version
	I0116 02:55:07.657899 1892116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:55:07.669177 1892116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:55:07.673807 1892116 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:55 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:55:07.673889 1892116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:55:07.682345 1892116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
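The b5213941.0 symlink follows OpenSSL's subject-hash naming convention, and the two commands above can be reproduced by hand against the same files:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 here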
	I0116 02:55:07.693617 1892116 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:55:07.697866 1892116 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:55:07.697911 1892116 kubeadm.go:404] StartCluster: {Name:addons-843965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-843965 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:55:07.698034 1892116 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0116 02:55:07.698095 1892116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:55:07.740144 1892116 cri.go:89] found id: ""
	I0116 02:55:07.740216 1892116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:55:07.750820 1892116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:55:07.761422 1892116 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 02:55:07.761520 1892116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:55:07.772351 1892116 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:55:07.772403 1892116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 02:55:07.872338 1892116 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 02:55:07.957511 1892116 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:55:26.102078 1892116 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:55:26.102133 1892116 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:55:26.102215 1892116 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 02:55:26.102267 1892116 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 02:55:26.102300 1892116 kubeadm.go:322] OS: Linux
	I0116 02:55:26.102342 1892116 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 02:55:26.102396 1892116 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 02:55:26.102442 1892116 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 02:55:26.102487 1892116 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 02:55:26.102532 1892116 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 02:55:26.102577 1892116 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 02:55:26.102619 1892116 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0116 02:55:26.102664 1892116 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0116 02:55:26.102707 1892116 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0116 02:55:26.102775 1892116 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:55:26.102865 1892116 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:55:26.102952 1892116 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:55:26.103010 1892116 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:55:26.105033 1892116 out.go:204]   - Generating certificates and keys ...
	I0116 02:55:26.105199 1892116 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:55:26.105285 1892116 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:55:26.105390 1892116 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:55:26.105545 1892116 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:55:26.105610 1892116 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:55:26.105661 1892116 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:55:26.105715 1892116 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:55:26.105842 1892116 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-843965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 02:55:26.105899 1892116 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:55:26.106014 1892116 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-843965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 02:55:26.106080 1892116 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:55:26.106144 1892116 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:55:26.106189 1892116 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:55:26.106244 1892116 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:55:26.106296 1892116 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:55:26.106349 1892116 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:55:26.106419 1892116 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:55:26.106474 1892116 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:55:26.106557 1892116 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:55:26.106623 1892116 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:55:26.108634 1892116 out.go:204]   - Booting up control plane ...
	I0116 02:55:26.108738 1892116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:55:26.108816 1892116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:55:26.108882 1892116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:55:26.108991 1892116 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:55:26.109075 1892116 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:55:26.109115 1892116 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:55:26.109269 1892116 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:55:26.109345 1892116 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.006829 seconds
	I0116 02:55:26.109469 1892116 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:55:26.109728 1892116 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:55:26.109799 1892116 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:55:26.109988 1892116 kubeadm.go:322] [mark-control-plane] Marking the node addons-843965 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:55:26.110045 1892116 kubeadm.go:322] [bootstrap-token] Using token: 5ccrjl.pay5uy3xwb94lc61
	I0116 02:55:26.111921 1892116 out.go:204]   - Configuring RBAC rules ...
	I0116 02:55:26.112030 1892116 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:55:26.112115 1892116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:55:26.112255 1892116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:55:26.112383 1892116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:55:26.112503 1892116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:55:26.112597 1892116 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:55:26.112712 1892116 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:55:26.112755 1892116 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:55:26.112800 1892116 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:55:26.112805 1892116 kubeadm.go:322] 
	I0116 02:55:26.112865 1892116 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:55:26.112870 1892116 kubeadm.go:322] 
	I0116 02:55:26.112947 1892116 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:55:26.112952 1892116 kubeadm.go:322] 
	I0116 02:55:26.112977 1892116 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:55:26.113036 1892116 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:55:26.113087 1892116 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:55:26.113091 1892116 kubeadm.go:322] 
	I0116 02:55:26.113145 1892116 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:55:26.113150 1892116 kubeadm.go:322] 
	I0116 02:55:26.113198 1892116 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:55:26.113202 1892116 kubeadm.go:322] 
	I0116 02:55:26.113255 1892116 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:55:26.113330 1892116 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:55:26.113409 1892116 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:55:26.113414 1892116 kubeadm.go:322] 
	I0116 02:55:26.113698 1892116 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:55:26.113807 1892116 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:55:26.113836 1892116 kubeadm.go:322] 
	I0116 02:55:26.113936 1892116 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5ccrjl.pay5uy3xwb94lc61 \
	I0116 02:55:26.114085 1892116 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6218d0988b2a7aa9cfeacd0df5d75f7b2af48c94d0234c3fb2bf032e099bbd3 \
	I0116 02:55:26.114111 1892116 kubeadm.go:322] 	--control-plane 
	I0116 02:55:26.114116 1892116 kubeadm.go:322] 
	I0116 02:55:26.114203 1892116 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:55:26.114210 1892116 kubeadm.go:322] 
	I0116 02:55:26.114336 1892116 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5ccrjl.pay5uy3xwb94lc61 \
	I0116 02:55:26.114500 1892116 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6218d0988b2a7aa9cfeacd0df5d75f7b2af48c94d0234c3fb2bf032e099bbd3 
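For reference, the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key and can be recomputed with the standard recipe from the kubeadm docs (on this node the CA sits under /var/lib/minikube/certs rather than /etc/kubernetes/pki):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'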
	I0116 02:55:26.114536 1892116 cni.go:84] Creating CNI manager for ""
	I0116 02:55:26.114555 1892116 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:55:26.116572 1892116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:55:26.118472 1892116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:55:26.124102 1892116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:55:26.124164 1892116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:55:26.166905 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
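Once the kindnet manifest is applied, its pods can be checked with the same pinned kubectl; a quick probe (the app=kindnet label is assumed from the upstream kindnet manifest, not shown in this log):

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet   # label assumed, not taken from this log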
	I0116 02:55:27.049777 1892116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:55:27.049923 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:27.050005 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=addons-843965 minikube.k8s.io/updated_at=2024_01_16T02_55_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:27.060870 1892116 ops.go:34] apiserver oom_adj: -16
	I0116 02:55:27.230993 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:27.731158 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:28.231997 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:28.731667 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:29.231215 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:29.731159 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:30.231716 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:30.731731 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:31.231768 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:31.731178 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:32.231969 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:32.731699 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:33.231749 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:33.731177 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:34.231507 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:34.731974 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:35.231353 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:35.731232 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:36.231303 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:36.731694 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:37.231742 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:37.731778 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:38.231140 1892116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:55:38.369976 1892116 kubeadm.go:1088] duration metric: took 11.320105765s to wait for elevateKubeSystemPrivileges.
	I0116 02:55:38.370017 1892116 kubeadm.go:406] StartCluster complete in 30.67210981s
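The burst of identical "kubectl get sa default" runs above is a readiness poll: once the default ServiceAccount exists, the controller-manager's service-account machinery is up. As a shell loop it is roughly:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log shows ~0.5s spacing between attempts
	done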
	I0116 02:55:38.370036 1892116 settings.go:142] acquiring lock: {Name:mk5ef3d7839aa1301dd151a46eaf62e1b5658d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:38.370159 1892116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 02:55:38.370567 1892116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/kubeconfig: {Name:mk03027f3f7cf4dc9d608a622efae9ada84d58d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:55:38.372979 1892116 config.go:182] Loaded profile config "addons-843965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:55:38.373043 1892116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:55:38.373162 1892116 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0116 02:55:38.373272 1892116 addons.go:69] Setting yakd=true in profile "addons-843965"
	I0116 02:55:38.373295 1892116 addons.go:234] Setting addon yakd=true in "addons-843965"
	I0116 02:55:38.373336 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.373900 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.374355 1892116 addons.go:69] Setting cloud-spanner=true in profile "addons-843965"
	I0116 02:55:38.374376 1892116 addons.go:234] Setting addon cloud-spanner=true in "addons-843965"
	I0116 02:55:38.374408 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.374818 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.375154 1892116 addons.go:69] Setting metrics-server=true in profile "addons-843965"
	I0116 02:55:38.375184 1892116 addons.go:234] Setting addon metrics-server=true in "addons-843965"
	I0116 02:55:38.375224 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.375676 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.376042 1892116 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-843965"
	I0116 02:55:38.376063 1892116 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-843965"
	I0116 02:55:38.376098 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.376485 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.381781 1892116 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-843965"
	I0116 02:55:38.381846 1892116 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-843965"
	I0116 02:55:38.381883 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.382293 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.384255 1892116 addons.go:69] Setting registry=true in profile "addons-843965"
	I0116 02:55:38.384276 1892116 addons.go:234] Setting addon registry=true in "addons-843965"
	I0116 02:55:38.384312 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.384757 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.418378 1892116 addons.go:69] Setting default-storageclass=true in profile "addons-843965"
	I0116 02:55:38.418460 1892116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-843965"
	I0116 02:55:38.418854 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.419475 1892116 addons.go:69] Setting storage-provisioner=true in profile "addons-843965"
	I0116 02:55:38.419535 1892116 addons.go:234] Setting addon storage-provisioner=true in "addons-843965"
	I0116 02:55:38.419627 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.420205 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.434754 1892116 addons.go:69] Setting gcp-auth=true in profile "addons-843965"
	I0116 02:55:38.434801 1892116 mustload.go:65] Loading cluster: addons-843965
	I0116 02:55:38.435012 1892116 config.go:182] Loaded profile config "addons-843965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:55:38.435293 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.440762 1892116 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-843965"
	I0116 02:55:38.440843 1892116 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-843965"
	I0116 02:55:38.443274 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.448645 1892116 addons.go:69] Setting ingress=true in profile "addons-843965"
	I0116 02:55:38.448720 1892116 addons.go:234] Setting addon ingress=true in "addons-843965"
	I0116 02:55:38.448809 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.449343 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.461706 1892116 addons.go:69] Setting volumesnapshots=true in profile "addons-843965"
	I0116 02:55:38.461774 1892116 addons.go:234] Setting addon volumesnapshots=true in "addons-843965"
	I0116 02:55:38.461852 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.462362 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.462555 1892116 addons.go:69] Setting ingress-dns=true in profile "addons-843965"
	I0116 02:55:38.462589 1892116 addons.go:234] Setting addon ingress-dns=true in "addons-843965"
	I0116 02:55:38.462638 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.463030 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.488672 1892116 addons.go:69] Setting inspektor-gadget=true in profile "addons-843965"
	I0116 02:55:38.488749 1892116 addons.go:234] Setting addon inspektor-gadget=true in "addons-843965"
	I0116 02:55:38.488824 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.489463 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
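Each addon flagged true in the `toEnable` map goes through the same three-step dance visible above: record the flag in the profile, check that the machine exists, and query its state with `docker container inspect --format={{.State.Status}}`. A stripped-down sketch of that loop; the helper names here are illustrative, not minikube's actual API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState shells out to docker the same way the cli_runner lines above do.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		profile := "addons-843965"
		for _, addon := range []string{"yakd", "cloud-spanner", "metrics-server", "ingress"} {
			fmt.Printf("Setting addon %s=true in %q\n", addon, profile)
			state, err := containerState(profile)
			if err != nil {
				fmt.Println("host check failed:", err)
				continue
			}
			fmt.Println("machine state:", state)
		}
	}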
	I0116 02:55:38.601547 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 02:55:38.603485 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 02:55:38.608544 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 02:55:38.612970 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 02:55:38.614998 1892116 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 02:55:38.626292 1892116 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 02:55:38.628201 1892116 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 02:55:38.628256 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 02:55:38.628346 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
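The `scp memory --> ...` entries are not file-to-file copies: the manifest bytes are rendered in memory and streamed straight to the remote path over the SSH session (the byte counts in parentheses are the rendered sizes). A rough OpenSSH-client equivalent of that pattern, using the forwarded port 35023 that appears later in this log, with the manifest contents elided:

	package main

	import (
		"bytes"
		"log"
		"os/exec"
	)

	func main() {
		manifest := []byte("# registry-rc.yaml contents rendered in memory\n")
		// Stream stdin into a root-owned file on the node, as scp-from-memory does.
		cmd := exec.Command("ssh", "-p", "35023", "docker@127.0.0.1",
			"sudo tee /etc/kubernetes/addons/registry-rc.yaml >/dev/null")
		cmd.Stdin = bytes.NewReader(manifest)
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}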
	I0116 02:55:38.635730 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 02:55:38.647090 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 02:55:38.650246 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 02:55:38.653494 1892116 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 02:55:38.653501 1892116 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 02:55:38.653507 1892116 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 02:55:38.662841 1892116 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 02:55:38.671723 1892116 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 02:55:38.671744 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 02:55:38.671811 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.669785 1892116 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:55:38.676956 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 02:55:38.677077 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.699049 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 02:55:38.699075 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 02:55:38.699139 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.701165 1892116 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 02:55:38.701193 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 02:55:38.701260 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.738490 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 02:55:38.740681 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.740855 1892116 addons.go:234] Setting addon default-storageclass=true in "addons-843965"
	I0116 02:55:38.741889 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.742387 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.742602 1892116 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 02:55:38.754044 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 02:55:38.754068 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 02:55:38.754137 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.765377 1892116 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-843965"
	I0116 02:55:38.765415 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:38.765897 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:38.743341 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 02:55:38.769676 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 02:55:38.769739 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.743374 1892116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:55:38.791722 1892116 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:55:38.791742 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:55:38.791803 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.795745 1892116 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 02:55:38.807058 1892116 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 02:55:38.813085 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 02:55:38.812981 1892116 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:55:38.815447 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 02:55:38.815529 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.815698 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 02:55:38.815784 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.867847 1892116 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 02:55:38.869966 1892116 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:55:38.872767 1892116 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:55:38.875027 1892116 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:55:38.875049 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 02:55:38.875115 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:38.888906 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:38.912763 1892116 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-843965" context rescaled to 1 replicas
	I0116 02:55:38.912801 1892116 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 02:55:38.915232 1892116 out.go:177] * Verifying Kubernetes components...
	I0116 02:55:38.917154 1892116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:55:38.914784 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:38.956651 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:38.990025 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:38.991064 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.017681 1892116 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:55:39.017701 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:55:39.017761 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:39.018041 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.025673 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.072329 1892116 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 02:55:39.074815 1892116 out.go:177]   - Using image docker.io/busybox:stable
	I0116 02:55:39.078835 1892116 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:55:39.078859 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 02:55:39.078923 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:39.087904 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.092052 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.107831 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.125796 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.141416 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:39.158438 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	W0116 02:55:39.162318 1892116 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0116 02:55:39.162350 1892116 retry.go:31] will retry after 280.069557ms: ssh: handshake failed: EOF
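Transient handshake failures like the one above are expected while a dozen SSH clients dial the node at once; the retry.go line shows minikube simply sleeping a short randomized interval and redialing. A minimal sketch of that retry helper, with illustrative jitter bounds:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// withRetry runs op up to attempts times, sleeping a jittered backoff
	// between failures, in the spirit of the retry.go entry above.
	func withRetry(attempts int, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			backoff := time.Duration(200+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %s: %v\n", backoff, err)
			time.Sleep(backoff)
		}
		return err
	}

	func main() {
		dials := 0
		_ = withRetry(3, func() error {
			dials++
			if dials == 1 {
				return errors.New("ssh: handshake failed: EOF")
			}
			return nil
		})
	}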
	I0116 02:55:39.238287 1892116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:55:39.241332 1892116 node_ready.go:35] waiting up to 6m0s for node "addons-843965" to be "Ready" ...
	I0116 02:55:39.244930 1892116 node_ready.go:49] node "addons-843965" has status "Ready":"True"
	I0116 02:55:39.244958 1892116 node_ready.go:38] duration metric: took 3.59417ms waiting for node "addons-843965" to be "Ready" ...
	I0116 02:55:39.244968 1892116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:55:39.258479 1892116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace to be "Ready" ...
	I0116 02:55:39.578183 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:55:39.723357 1892116 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 02:55:39.723427 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 02:55:39.738087 1892116 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 02:55:39.738150 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 02:55:39.748043 1892116 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 02:55:39.748114 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 02:55:39.788848 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:55:39.794914 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 02:55:39.795009 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 02:55:39.826650 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 02:55:39.840067 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 02:55:39.840130 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 02:55:39.881986 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 02:55:39.882048 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 02:55:39.906942 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:55:39.927536 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:55:39.970589 1892116 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:55:39.970658 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 02:55:40.007730 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:55:40.017853 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:55:40.019786 1892116 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 02:55:40.019876 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 02:55:40.029999 1892116 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 02:55:40.030111 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 02:55:40.122819 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 02:55:40.122894 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 02:55:40.196576 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 02:55:40.196646 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 02:55:40.223707 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 02:55:40.223784 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 02:55:40.298825 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:55:40.303329 1892116 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 02:55:40.303356 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 02:55:40.373241 1892116 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:55:40.373310 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 02:55:40.388298 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 02:55:40.388368 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 02:55:40.457471 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 02:55:40.457543 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 02:55:40.495696 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 02:55:40.495756 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 02:55:40.586208 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 02:55:40.586343 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 02:55:40.682276 1892116 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:55:40.682338 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 02:55:40.703452 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:55:40.740678 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 02:55:40.740747 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 02:55:40.851631 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 02:55:40.851690 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 02:55:40.870471 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:55:40.964407 1892116 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:55:40.964472 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 02:55:41.060993 1892116 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 02:55:41.061066 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 02:55:41.125124 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 02:55:41.125202 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 02:55:41.225263 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:55:41.264831 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:41.282777 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 02:55:41.282848 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 02:55:41.383341 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 02:55:41.383412 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 02:55:41.523664 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 02:55:41.523736 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 02:55:41.552610 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 02:55:41.552688 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 02:55:41.578519 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 02:55:41.578585 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 02:55:41.625357 1892116 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:55:41.625431 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 02:55:41.662015 1892116 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.423687364s)
	I0116 02:55:41.662105 1892116 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
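The sed pipeline that just completed (2.42s) rewrites the coredns ConfigMap in place. Reconstructed from the sed expressions in the command itself, the block it splices into the Corefile ahead of the `forward . /etc/resolv.conf` line is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

plus a `log` directive ahead of `errors`, so pods can resolve the host gateway by name while unmatched queries still fall through to the upstream resolver.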
	I0116 02:55:41.668724 1892116 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:55:41.668792 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 02:55:41.695785 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:55:41.752697 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:55:41.819117 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.240854583s)
	I0116 02:55:43.290953 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:43.557401 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.768479601s)
	I0116 02:55:43.557536 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.730819609s)
	I0116 02:55:43.557594 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.650581142s)
	I0116 02:55:45.302807 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:45.554324 1892116 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 02:55:45.554676 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:45.603355 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:45.855239 1892116 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 02:55:45.892859 1892116 addons.go:234] Setting addon gcp-auth=true in "addons-843965"
	I0116 02:55:45.892912 1892116 host.go:66] Checking if "addons-843965" exists ...
	I0116 02:55:45.893364 1892116 cli_runner.go:164] Run: docker container inspect addons-843965 --format={{.State.Status}}
	I0116 02:55:45.918428 1892116 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 02:55:45.918484 1892116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-843965
	I0116 02:55:45.950905 1892116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/addons-843965/id_rsa Username:docker}
	I0116 02:55:46.520754 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.593133861s)
	I0116 02:55:46.520788 1892116 addons.go:470] Verifying addon ingress=true in "addons-843965"
	I0116 02:55:46.523950 1892116 out.go:177] * Verifying ingress addon...
	I0116 02:55:46.520982 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.513183313s)
	I0116 02:55:46.521098 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.503174507s)
	I0116 02:55:46.521134 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.222278022s)
	I0116 02:55:46.521212 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.817689553s)
	I0116 02:55:46.521332 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.295972177s)
	I0116 02:55:46.521344 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.650715796s)
	I0116 02:55:46.526743 1892116 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 02:55:46.526967 1892116 addons.go:470] Verifying addon metrics-server=true in "addons-843965"
	I0116 02:55:46.526993 1892116 addons.go:470] Verifying addon registry=true in "addons-843965"
	I0116 02:55:46.528648 1892116 out.go:177] * Verifying registry addon...
	W0116 02:55:46.527127 1892116 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 02:55:46.530591 1892116 retry.go:31] will retry after 276.639829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
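The failure above (repeated verbatim in the retry notice) is an ordering problem, not a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRD that defines its kind, before the API server has registered the new type; the retry at 02:55:46.807 below re-applies with --force and completes. The general-purpose fix is to apply CRDs first and wait for them to reach the Established condition, sketched here assuming kubectl is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) error {
		return exec.Command("kubectl", args...).Run()
	}

	func main() {
		steps := [][]string{
			// 1. Install the CRD itself.
			{"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
			// 2. Block until the API server has registered the new kind.
			{"wait", "--for=condition=established", "--timeout=60s",
				"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
			// 3. Only now apply objects of that kind.
			{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
		}
		for _, step := range steps {
			if err := kubectl(step...); err != nil {
				fmt.Println("failed:", step, err)
				return
			}
		}
	}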
	I0116 02:55:46.531398 1892116 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 02:55:46.531575 1892116 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-843965 service yakd-dashboard -n yakd-dashboard
	
	I0116 02:55:46.542871 1892116 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 02:55:46.542898 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0116 02:55:46.545552 1892116 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
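The storageclass warning above is Kubernetes' optimistic concurrency rejecting a stale write: the `local-path` object changed between minikube's read and its update, so the apiserver demands a retry against the latest resourceVersion. client-go ships retry.RetryOnConflict for exactly this; a dependency-free sketch of the same read-mutate-retry loop, where `update` is a stand-in for the real API call, not minikube's code:

	package main

	import (
		"errors"
		"fmt"
	)

	var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

	type storageClass struct {
		resourceVersion int
		isDefault       bool
	}

	// update enforces optimistic concurrency the way the apiserver does.
	func update(store *storageClass, sc storageClass) error {
		if sc.resourceVersion != store.resourceVersion {
			return errConflict
		}
		store.isDefault = sc.isDefault
		store.resourceVersion++
		return nil
	}

	func main() {
		store := &storageClass{resourceVersion: 7, isDefault: true}
		for attempt := 1; attempt <= 3; attempt++ {
			sc := *store // re-read the latest version on every attempt
			sc.isDefault = false
			if attempt == 1 {
				store.resourceVersion++ // simulate the concurrent writer that hit minikube
			}
			if err := update(store, sc); err == nil {
				fmt.Println("marked non-default on attempt", attempt)
				return
			}
		}
		fmt.Println("gave up after repeated conflicts")
	}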
	I0116 02:55:46.548622 1892116 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 02:55:46.548643 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:46.807414 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:55:47.031541 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:47.042018 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:47.305980 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:47.545955 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:47.547119 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:47.988542 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.292664578s)
	I0116 02:55:47.988623 1892116 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-843965"
	I0116 02:55:47.991118 1892116 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 02:55:47.988870 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.236092511s)
	I0116 02:55:47.988908 1892116 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.070460748s)
	I0116 02:55:47.995495 1892116 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:55:47.994280 1892116 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 02:55:47.999947 1892116 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 02:55:48.003669 1892116 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 02:55:48.003753 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 02:55:48.014111 1892116 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 02:55:48.014140 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:48.031677 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:48.039612 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:48.071040 1892116 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 02:55:48.071104 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 02:55:48.148108 1892116 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:55:48.148180 1892116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 02:55:48.193731 1892116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:55:48.503743 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:48.531211 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:48.536914 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:48.838194 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.030719354s)
	I0116 02:55:49.003608 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:49.032053 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:49.036638 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:49.295224 1892116 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.101444905s)
	I0116 02:55:49.298081 1892116 addons.go:470] Verifying addon gcp-auth=true in "addons-843965"
	I0116 02:55:49.300721 1892116 out.go:177] * Verifying gcp-auth addon...
	I0116 02:55:49.303740 1892116 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 02:55:49.320008 1892116 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 02:55:49.320034 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:49.503316 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:49.532151 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:49.536788 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:49.765485 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:49.808330 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:50.005352 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:50.031445 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:50.036514 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:50.308383 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:50.503235 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:50.532626 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:50.538408 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:50.808398 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:51.003196 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:51.031928 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:51.036386 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:51.307916 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:51.503934 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:51.531618 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:51.536130 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:51.807773 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:52.008334 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:52.032182 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:52.036916 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:52.266410 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:52.312469 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:52.505386 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:52.532179 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:52.537046 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:52.809595 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:53.003562 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:53.033057 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:53.038139 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:53.307372 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:53.503667 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:53.532071 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:53.537019 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:53.808248 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:54.004455 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:54.031715 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:54.036878 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:54.270937 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:54.307789 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:54.504041 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:54.532463 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:54.537903 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:54.807802 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:55.004322 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:55.042284 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:55.043533 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:55.311980 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:55.503533 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:55.531380 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:55.535789 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:55.807614 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:56.008664 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:56.032546 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:56.038995 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:56.308020 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:56.502952 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:56.531474 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:56.536537 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:56.766842 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:56.807399 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:57.004713 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:57.031557 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:57.035725 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:57.308600 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:57.503326 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:57.532279 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:57.537287 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:57.808274 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:58.006356 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:58.032150 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:58.036471 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:58.307764 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:58.503959 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:58.531595 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:58.536238 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:58.807603 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:59.003755 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:59.031410 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:59.039727 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:59.267530 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:55:59.308011 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:55:59.503557 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:55:59.531717 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:55:59.536557 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:55:59.807576 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:00.009781 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:00.050172 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:00.051023 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:00.307967 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:00.503142 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:00.531114 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:00.536344 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:00.807884 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:01.003941 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:01.032042 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:01.036681 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:01.307696 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:01.503291 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:01.531894 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:01.536336 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:01.765603 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:01.808389 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:02.004544 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:02.032280 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:02.037211 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:02.308101 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:02.503877 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:02.531172 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:02.536467 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:02.808091 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:03.003462 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:03.031946 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:03.036502 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:03.307219 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:03.503310 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:03.531337 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:03.536837 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:03.765841 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:03.808226 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:04.004480 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:04.031288 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:04.036806 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:04.308238 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:04.503603 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:04.531769 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:04.536286 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:04.807785 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:05.004163 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:05.032212 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:05.036501 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:05.307931 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:05.503421 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:05.531766 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:05.535830 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:05.808181 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:06.011199 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:06.031763 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:06.036773 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:06.274551 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:06.307963 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:06.502990 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:06.531239 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:06.537160 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:06.807831 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:07.003629 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:07.032090 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:07.036301 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:07.308182 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:07.502649 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:07.544560 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:07.545518 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:07.807106 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:08.004979 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:08.031418 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:08.035853 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:08.307621 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:08.502939 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:08.531384 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:08.535800 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:08.767255 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:08.808022 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:09.004204 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:09.031338 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:09.035943 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:09.308322 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:09.503556 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:09.531646 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:09.535788 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:09.807932 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:10.004540 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:10.033206 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:10.045405 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:10.308084 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:10.503349 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:10.531302 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:10.536664 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:10.807590 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:11.003791 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:11.031253 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:11.036599 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:11.265107 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:11.308341 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:11.503739 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:11.532128 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:11.536614 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:11.807286 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:12.020799 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:12.034936 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:12.043586 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:12.316444 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:12.502858 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:12.532291 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:12.539740 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:12.807486 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:13.004307 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:13.032349 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:13.037490 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:13.265227 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:13.308051 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:13.503835 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:13.531524 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:13.536139 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:13.807979 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:14.004774 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:14.031534 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:14.036741 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:14.308939 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:14.504559 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:14.533951 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:14.537528 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:14.807920 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:15.003848 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:15.032824 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:15.037374 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:15.308255 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:15.503030 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:15.531238 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:15.536882 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:15.771282 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:15.807861 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:16.003716 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:16.032136 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:16.037053 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:16.308588 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:16.506950 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:16.533353 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:16.538411 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:16.808276 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:17.003902 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:17.032105 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:17.038059 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:17.307848 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:17.503161 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:17.531965 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:17.536547 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:17.807294 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:18.004359 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:18.032368 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:18.037586 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:18.265789 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:18.307492 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:18.503002 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:18.534088 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:18.537527 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:18.808217 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:19.003574 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:19.032063 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:19.039880 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:19.308151 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:19.508801 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:19.531145 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:19.536565 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:19.807126 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:20.004301 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:20.031942 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:20.036378 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:20.307776 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:20.505655 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:20.531646 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:20.535984 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:20.765717 1892116 pod_ready.go:102] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"False"
	I0116 02:56:20.808085 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:21.003758 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:21.031706 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:21.036413 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:21.307410 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:21.503547 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:21.531882 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:21.536016 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:21.765042 1892116 pod_ready.go:92] pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.765069 1892116 pod_ready.go:81] duration metric: took 42.506558132s waiting for pod "coredns-5dd5756b68-drb7k" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.765081 1892116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v67m8" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.767124 1892116 pod_ready.go:97] error getting pod "coredns-5dd5756b68-v67m8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-v67m8" not found
	I0116 02:56:21.767151 1892116 pod_ready.go:81] duration metric: took 2.063826ms waiting for pod "coredns-5dd5756b68-v67m8" in "kube-system" namespace to be "Ready" ...
	E0116 02:56:21.767162 1892116 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-v67m8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-v67m8" not found
	I0116 02:56:21.767168 1892116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.772325 1892116 pod_ready.go:92] pod "etcd-addons-843965" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.772344 1892116 pod_ready.go:81] duration metric: took 5.16846ms waiting for pod "etcd-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.772356 1892116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.777796 1892116 pod_ready.go:92] pod "kube-apiserver-addons-843965" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.777819 1892116 pod_ready.go:81] duration metric: took 5.455221ms waiting for pod "kube-apiserver-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.777829 1892116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.783266 1892116 pod_ready.go:92] pod "kube-controller-manager-addons-843965" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.783287 1892116 pod_ready.go:81] duration metric: took 5.449953ms waiting for pod "kube-controller-manager-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.783299 1892116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shxz5" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.807808 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:21.962643 1892116 pod_ready.go:92] pod "kube-proxy-shxz5" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:21.962667 1892116 pod_ready.go:81] duration metric: took 179.361184ms waiting for pod "kube-proxy-shxz5" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:21.962679 1892116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:22.004210 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:22.032101 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:22.036546 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:22.310413 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:22.364012 1892116 pod_ready.go:92] pod "kube-scheduler-addons-843965" in "kube-system" namespace has status "Ready":"True"
	I0116 02:56:22.364049 1892116 pod_ready.go:81] duration metric: took 401.354806ms waiting for pod "kube-scheduler-addons-843965" in "kube-system" namespace to be "Ready" ...
	I0116 02:56:22.364060 1892116 pod_ready.go:38] duration metric: took 43.119051002s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:56:22.364074 1892116 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:56:22.364153 1892116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:56:22.396113 1892116 api_server.go:72] duration metric: took 43.483283408s to wait for apiserver process to appear ...
	I0116 02:56:22.396142 1892116 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:56:22.396163 1892116 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 02:56:22.405784 1892116 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 02:56:22.407560 1892116 api_server.go:141] control plane version: v1.28.4
	I0116 02:56:22.407581 1892116 api_server.go:131] duration metric: took 11.432092ms to wait for apiserver health ...
	I0116 02:56:22.407590 1892116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:56:22.504296 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:22.532314 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:22.537575 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:22.572609 1892116 system_pods.go:59] 18 kube-system pods found
	I0116 02:56:22.572687 1892116 system_pods.go:61] "coredns-5dd5756b68-drb7k" [5d711312-1d08-44bc-a927-acb57c46dde3] Running
	I0116 02:56:22.572713 1892116 system_pods.go:61] "csi-hostpath-attacher-0" [b33859d8-a06e-47aa-9e5b-b1fa3361b6ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 02:56:22.572734 1892116 system_pods.go:61] "csi-hostpath-resizer-0" [7816225b-e3b5-4636-ae31-ce0ab725df08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0116 02:56:22.572772 1892116 system_pods.go:61] "csi-hostpathplugin-67k8j" [25de6a1f-7131-4fb2-b2c1-6c456d1dcccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:56:22.572800 1892116 system_pods.go:61] "etcd-addons-843965" [7ba09c04-5376-4351-b2fb-069ebaebc3fa] Running
	I0116 02:56:22.572820 1892116 system_pods.go:61] "kindnet-p7psr" [8b6ba9f1-d3da-4a60-a6ce-8dfda33792b7] Running
	I0116 02:56:22.572837 1892116 system_pods.go:61] "kube-apiserver-addons-843965" [33d1e64e-6d09-43a9-9f8e-70a333257907] Running
	I0116 02:56:22.572853 1892116 system_pods.go:61] "kube-controller-manager-addons-843965" [4fbb42f1-b416-4339-a359-8a97f6589e8d] Running
	I0116 02:56:22.572870 1892116 system_pods.go:61] "kube-ingress-dns-minikube" [8122a341-637a-43d1-99b4-4f74cfcb03f0] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 02:56:22.572892 1892116 system_pods.go:61] "kube-proxy-shxz5" [66275fc0-354e-4fbc-b31e-44770af0e751] Running
	I0116 02:56:22.572911 1892116 system_pods.go:61] "kube-scheduler-addons-843965" [befad0b9-f2ab-4a9d-a74b-8968f4d8d4c9] Running
	I0116 02:56:22.572931 1892116 system_pods.go:61] "metrics-server-7c66d45ddc-cshtq" [e2de00b3-dd3e-4347-a94f-b186d7fe0fea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 02:56:22.572948 1892116 system_pods.go:61] "nvidia-device-plugin-daemonset-zlmrk" [da7bd62d-e415-4145-ad12-6feb7be5fe21] Running
	I0116 02:56:22.572963 1892116 system_pods.go:61] "registry-bzgv9" [af30b04d-da1d-4148-b183-4ca8c48dba30] Running
	I0116 02:56:22.572979 1892116 system_pods.go:61] "registry-proxy-sfv97" [224d6c6a-4fbd-415b-92b0-562bdde1b323] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 02:56:22.572999 1892116 system_pods.go:61] "snapshot-controller-58dbcc7b99-kblzs" [6694f41d-1dee-45de-b020-072a9a790144] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:56:22.573017 1892116 system_pods.go:61] "snapshot-controller-58dbcc7b99-vtcnr" [7f950a4d-15d7-43fa-98d2-fec43d16eab9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:56:22.573033 1892116 system_pods.go:61] "storage-provisioner" [253b2950-26ab-4c7d-ae43-bc75f6fd3e61] Running
	I0116 02:56:22.573051 1892116 system_pods.go:74] duration metric: took 165.455302ms to wait for pod list to return data ...
	I0116 02:56:22.573078 1892116 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:56:22.762937 1892116 default_sa.go:45] found service account: "default"
	I0116 02:56:22.763004 1892116 default_sa.go:55] duration metric: took 189.907787ms for default service account to be created ...
	I0116 02:56:22.763028 1892116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:56:22.807812 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:22.971498 1892116 system_pods.go:86] 18 kube-system pods found
	I0116 02:56:22.971572 1892116 system_pods.go:89] "coredns-5dd5756b68-drb7k" [5d711312-1d08-44bc-a927-acb57c46dde3] Running
	I0116 02:56:22.971596 1892116 system_pods.go:89] "csi-hostpath-attacher-0" [b33859d8-a06e-47aa-9e5b-b1fa3361b6ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 02:56:22.971615 1892116 system_pods.go:89] "csi-hostpath-resizer-0" [7816225b-e3b5-4636-ae31-ce0ab725df08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0116 02:56:22.971651 1892116 system_pods.go:89] "csi-hostpathplugin-67k8j" [25de6a1f-7131-4fb2-b2c1-6c456d1dcccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 02:56:22.971673 1892116 system_pods.go:89] "etcd-addons-843965" [7ba09c04-5376-4351-b2fb-069ebaebc3fa] Running
	I0116 02:56:22.971690 1892116 system_pods.go:89] "kindnet-p7psr" [8b6ba9f1-d3da-4a60-a6ce-8dfda33792b7] Running
	I0116 02:56:22.971706 1892116 system_pods.go:89] "kube-apiserver-addons-843965" [33d1e64e-6d09-43a9-9f8e-70a333257907] Running
	I0116 02:56:22.971720 1892116 system_pods.go:89] "kube-controller-manager-addons-843965" [4fbb42f1-b416-4339-a359-8a97f6589e8d] Running
	I0116 02:56:22.971750 1892116 system_pods.go:89] "kube-ingress-dns-minikube" [8122a341-637a-43d1-99b4-4f74cfcb03f0] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 02:56:22.971772 1892116 system_pods.go:89] "kube-proxy-shxz5" [66275fc0-354e-4fbc-b31e-44770af0e751] Running
	I0116 02:56:22.971790 1892116 system_pods.go:89] "kube-scheduler-addons-843965" [befad0b9-f2ab-4a9d-a74b-8968f4d8d4c9] Running
	I0116 02:56:22.971809 1892116 system_pods.go:89] "metrics-server-7c66d45ddc-cshtq" [e2de00b3-dd3e-4347-a94f-b186d7fe0fea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 02:56:22.971825 1892116 system_pods.go:89] "nvidia-device-plugin-daemonset-zlmrk" [da7bd62d-e415-4145-ad12-6feb7be5fe21] Running
	I0116 02:56:22.971851 1892116 system_pods.go:89] "registry-bzgv9" [af30b04d-da1d-4148-b183-4ca8c48dba30] Running
	I0116 02:56:22.971876 1892116 system_pods.go:89] "registry-proxy-sfv97" [224d6c6a-4fbd-415b-92b0-562bdde1b323] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 02:56:22.971898 1892116 system_pods.go:89] "snapshot-controller-58dbcc7b99-kblzs" [6694f41d-1dee-45de-b020-072a9a790144] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:56:22.971917 1892116 system_pods.go:89] "snapshot-controller-58dbcc7b99-vtcnr" [7f950a4d-15d7-43fa-98d2-fec43d16eab9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 02:56:22.971931 1892116 system_pods.go:89] "storage-provisioner" [253b2950-26ab-4c7d-ae43-bc75f6fd3e61] Running
	I0116 02:56:22.971959 1892116 system_pods.go:126] duration metric: took 208.913176ms to wait for k8s-apps to be running ...
	I0116 02:56:22.971983 1892116 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:56:22.972061 1892116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:56:22.988541 1892116 system_svc.go:56] duration metric: took 16.549968ms WaitForService to wait for kubelet.
	I0116 02:56:22.988612 1892116 kubeadm.go:581] duration metric: took 44.075787238s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:56:22.988647 1892116 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:56:23.004011 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:23.032432 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:23.036559 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:23.163347 1892116 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 02:56:23.163424 1892116 node_conditions.go:123] node cpu capacity is 2
	I0116 02:56:23.163450 1892116 node_conditions.go:105] duration metric: took 174.786505ms to run NodePressure ...
	I0116 02:56:23.163473 1892116 start.go:228] waiting for startup goroutines ...
	I0116 02:56:23.308514 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:23.503305 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:23.534065 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:23.537552 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:23.807523 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:24.010218 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:24.034383 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:24.038011 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:56:24.311673 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:24.507898 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:24.532487 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:24.536231 1892116 kapi.go:107] duration metric: took 38.004828742s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 02:56:24.807881 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:25.003305 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:25.031707 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:25.308429 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:25.503624 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:25.532222 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:25.808110 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:26.005779 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:26.032369 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:26.308738 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:26.503150 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:26.531416 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:26.810271 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:27.004222 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:27.032144 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:27.308178 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:27.504134 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:27.531907 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:27.807714 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:28.005048 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:28.032162 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:28.308545 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:28.507871 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:28.534406 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:28.807670 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:29.004522 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:29.033411 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:29.308397 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:29.503569 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:29.531727 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:29.807065 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:30.003721 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:30.032549 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:30.307820 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:30.503873 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:30.532916 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:30.810334 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:31.004901 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:31.032321 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:31.308063 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:31.503703 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:31.532288 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:31.807999 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:32.006154 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:32.031357 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:32.308606 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:32.505347 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:32.532164 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:32.808035 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:33.004191 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:33.031599 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:33.307548 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:33.504467 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:33.532875 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:33.807885 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:34.005157 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:34.031835 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:34.307950 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:34.507069 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:34.532278 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:34.808240 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:35.003674 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:35.031450 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:35.308292 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:35.503828 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:35.537520 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:35.810999 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:36.005009 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:36.031949 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:36.307968 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:36.503613 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:36.531609 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:36.807389 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:37.003919 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:37.031833 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:37.307530 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:37.502806 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:37.532038 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:37.807666 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:38.004182 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:38.031685 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:38.307477 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:38.505857 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:38.533980 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:38.808114 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:39.007792 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:39.039518 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:39.308148 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:39.504532 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:39.532859 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:39.814582 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:40.005856 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:40.037687 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:40.308042 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:40.503538 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:40.533190 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:40.808171 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:41.004456 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:41.034139 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:41.308875 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:41.503998 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:41.532349 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:41.808591 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:42.005070 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:42.033809 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:42.309046 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:42.507861 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:42.532264 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:42.812503 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:43.004573 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:43.031360 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:43.308395 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:43.502907 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:43.531851 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:43.807466 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:44.003926 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:44.031453 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:44.307475 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:44.503051 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:56:44.531044 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:44.807681 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:45.007608 1892116 kapi.go:107] duration metric: took 57.013321783s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 02:56:45.032242 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:45.308282 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:45.531879 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:45.807543 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:46.031551 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:46.307421 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:46.531980 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:46.807947 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:47.032313 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:47.308095 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:47.531867 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:47.807393 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:48.032058 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:48.307985 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:48.531614 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:48.807151 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:49.032183 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:49.308251 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:49.532010 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:49.807790 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:50.031774 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:50.307602 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:50.530929 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:50.807698 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:51.031415 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:51.307261 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:51.532479 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:51.808707 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:52.031638 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:52.319027 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:52.535123 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:52.808269 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:53.032021 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:53.308421 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:53.532097 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:53.807789 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:54.032373 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:54.324999 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:54.532713 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:54.807818 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:55.032366 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:55.311667 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:55.531642 1892116 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:56:55.807040 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:56.031823 1892116 kapi.go:107] duration metric: took 1m9.505074163s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 02:56:56.311089 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:56.809976 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:57.307551 1892116 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:56:57.807216 1892116 kapi.go:107] duration metric: took 1m8.503474131s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 02:56:57.809429 1892116 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-843965 cluster.
	I0116 02:56:57.811509 1892116 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 02:56:57.813216 1892116 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 02:56:57.815224 1892116 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0116 02:56:57.817099 1892116 addons.go:505] enable addons completed in 1m19.443961092s: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0116 02:56:57.817137 1892116 start.go:233] waiting for cluster config update ...
	I0116 02:56:57.817169 1892116 start.go:242] writing updated cluster config ...
	I0116 02:56:57.817530 1892116 ssh_runner.go:195] Run: rm -f paused
	I0116 02:56:58.157405 1892116 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:56:58.159259 1892116 out.go:177] * Done! kubectl is now configured to use "addons-843965" cluster and "default" namespace by default
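
The kapi.go:96 lines above are minikube's addon readiness poll: roughly every 500ms it re-lists pods matching a label selector and logs the current phase until all matches report Running, at which point kapi.go:107 records the total wait as a duration metric. A minimal sketch of that pattern, assuming client-go and wait.PollImmediate; this is an illustration of the loop shape, not minikube's actual kapi.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPods polls until every pod matching selector is Running,
    // logging the current state each round like the kapi.go lines above.
    func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err != nil || len(pods.Items) == 0 {
    			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
    			return false, nil // tolerate transient errors, keep polling
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    				return false, nil
    			}
    		}
    		return true, nil
    	})
    	if err == nil {
    		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
    	}
    	return err
    }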
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	30eb4e38c6ec1       45e33ff5627be       41 seconds ago       Exited              gadget                       4                   cbba5317ddbb3       gadget-j8jtx
	549962d74f6a9       fc9db2894f4e4       50 seconds ago       Exited              helper-pod                   0                   84693f13d6c5f       helper-pod-delete-pvc-7b134c94-38a8-4396-b5f8-502ac0f0b814
	ff86106dec4b9       23466caa55cb7       53 seconds ago       Exited              busybox                      0                   8083d0973d78a       test-local-path
	290bcf3124d78       fc9db2894f4e4       57 seconds ago       Exited              helper-pod                   0                   f43bfe89a1d14       helper-pod-create-pvc-7b134c94-38a8-4396-b5f8-502ac0f0b814
	c9b36782cf014       1499ed4fbd0aa       About a minute ago   Exited              minikube-ingress-dns         4                   c912699ad75c5       kube-ingress-dns-minikube
	43a9d9f2634b7       2a5f29343eb03       About a minute ago   Running             gcp-auth                     0                   a4ccda24f38d7       gcp-auth-d4c87556c-2m7mf
	14451960e63ec       b2e1c763f63b9       About a minute ago   Running             controller                   0                   85d55e5a9fdae       ingress-nginx-controller-69cff4fd79-6rwpz
	b915a048f4ada       af594c6a879f2       About a minute ago   Exited              patch                        2                   57f7ea89952f1       ingress-nginx-admission-patch-75m88
	fa71d0e6c2150       24087ab2d9047       About a minute ago   Running             metrics-server               0                   cce745584637d       metrics-server-7c66d45ddc-cshtq
	71838e965dea9       af594c6a879f2       About a minute ago   Exited              create                       0                   2aefb233d14d0       ingress-nginx-admission-create-r6qf8
	8eda7b5dabc30       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   eea5ff6f92d2f       snapshot-controller-58dbcc7b99-kblzs
	86fb839a1af30       20e3f2db01e81       About a minute ago   Running             yakd                         0                   940980a2d7bb2       yakd-dashboard-9947fc6bf-ccdsg
	b43bab1a74319       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   8e536f1a25300       snapshot-controller-58dbcc7b99-vtcnr
	819369e220a77       97e04611ad434       2 minutes ago        Running             coredns                      0                   f7213821037ce       coredns-5dd5756b68-drb7k
	4b7c69e163454       a89778274bf53       2 minutes ago        Running             cloud-spanner-emulator       0                   b4f3fa1ceabee       cloud-spanner-emulator-64c8c85f65-rw7hq
	52cc6edb069f5       ba04bb24b9575       2 minutes ago        Running             storage-provisioner          0                   5344b6e3d6abc       storage-provisioner
	653b92beb0f55       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                   0                   90409322275c0       kube-proxy-shxz5
	cad3dfa1ad9e7       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni                  0                   01cb52cc88d4f       kindnet-p7psr
	51c33b06e0ddb       04b4c447bb9d4       3 minutes ago        Running             kube-apiserver               0                   a8c4e5ba61743       kube-apiserver-addons-843965
	ce7400afe9ca1       9cdd6470f48c8       3 minutes ago        Running             etcd                         0                   9e31239605ff1       etcd-addons-843965
	79153b07155cf       05c284c929889       3 minutes ago        Running             kube-scheduler               0                   391282c0e098f       kube-scheduler-addons-843965
	7d7aa230689f6       9961cbceaf234       3 minutes ago        Running             kube-controller-manager      0                   c9673fca8da20       kube-controller-manager-addons-843965
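
The table above is container state as containerd reports it for the k8s.io namespace. A small sketch that lists containers and their task states directly through the containerd Go client; the socket path and namespace match what this log shows, but the code itself is illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Socket path from the node's kubeadm cri-socket annotation.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// Kubernetes-managed containers live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	containers, err := client.Containers(ctx)
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range containers {
    		state := "created" // no task yet means never started
    		if task, err := c.Task(ctx, nil); err == nil {
    			if st, err := task.Status(ctx); err == nil {
    				state = string(st.Status)
    			}
    		}
    		info, _ := c.Info(ctx)
    		fmt.Printf("%s\t%s\t%s\n", c.ID()[:13], info.Image, state)
    	}
    }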
	
	
	==> containerd <==
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.127748018Z" level=error msg="ContainerStatus for \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.128558629Z" level=error msg="ContainerStatus for \"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.129231356Z" level=error msg="ContainerStatus for \"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.129931241Z" level=error msg="ContainerStatus for \"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.130643631Z" level=error msg="ContainerStatus for \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.131387626Z" level=error msg="ContainerStatus for \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.132140778Z" level=error msg="ContainerStatus for \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.132781826Z" level=error msg="ContainerStatus for \"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.133397021Z" level=error msg="ContainerStatus for \"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.134036485Z" level=error msg="ContainerStatus for \"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.134651811Z" level=error msg="ContainerStatus for \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.135450689Z" level=error msg="ContainerStatus for \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.136161126Z" level=error msg="ContainerStatus for \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.136758229Z" level=error msg="ContainerStatus for \"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.137404446Z" level=error msg="ContainerStatus for \"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.138103749Z" level=error msg="ContainerStatus for \"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.138691843Z" level=error msg="ContainerStatus for \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.139378773Z" level=error msg="ContainerStatus for \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.139979551Z" level=error msg="ContainerStatus for \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.141684979Z" level=info msg="RemoveContainer for \"1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09\""
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.147128778Z" level=info msg="RemoveContainer for \"1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09\" returns successfully"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.147891447Z" level=error msg="ContainerStatus for \"1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09\": not found"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.149736891Z" level=info msg="RemoveContainer for \"afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41\""
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.156975157Z" level=info msg="RemoveContainer for \"afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41\" returns successfully"
	Jan 16 02:58:22 addons-843965 containerd[743]: time="2024-01-16T02:58:22.157963855Z" level=error msg="ContainerStatus for \"afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41\": not found"
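
Each failed ContainerStatus above is the kubelet asking about a container that was already removed; containerd answers with gRPC code NotFound, which callers are expected to treat as "already gone" rather than a hard failure. A hedged sketch of that error check against the CRI runtime service (request and field names follow k8s.io/cri-api; treat the surrounding code as illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // alreadyGone reports whether a ContainerStatus failure only means the
    // container was removed between listing it and querying it.
    func alreadyGone(err error) bool {
    	return status.Code(err) == codes.NotFound
    }

    func describe(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) {
    	resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
    	if err != nil {
    		if alreadyGone(err) {
    			fmt.Printf("container %s already removed, skipping\n", id)
    			return
    		}
    		panic(err) // anything other than NotFound is a real runtime error
    	}
    	fmt.Println(resp.Status.State)
    }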
	
	
	==> coredns [819369e220a77836381659af98ccbc1c8ba4bdb0605819cc2b3e988ad6c2c214] <==
	[INFO] 10.244.0.5:56262 - 26906 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002162112s
	[INFO] 10.244.0.5:48322 - 42457 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133469s
	[INFO] 10.244.0.5:48322 - 14043 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101831s
	[INFO] 10.244.0.5:43268 - 10414 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080465s
	[INFO] 10.244.0.5:43268 - 22435 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070611s
	[INFO] 10.244.0.5:38237 - 17218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114196s
	[INFO] 10.244.0.5:38237 - 40513 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076404s
	[INFO] 10.244.0.5:37741 - 44733 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101683s
	[INFO] 10.244.0.5:37741 - 48051 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038965s
	[INFO] 10.244.0.5:46074 - 43379 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001298654s
	[INFO] 10.244.0.5:46074 - 57469 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001032405s
	[INFO] 10.244.0.5:34907 - 33486 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00007674s
	[INFO] 10.244.0.5:34907 - 37064 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050001s
	[INFO] 10.244.0.20:49981 - 25185 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151619s
	[INFO] 10.244.0.20:49102 - 47023 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000168053s
	[INFO] 10.244.0.20:35320 - 4918 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122523s
	[INFO] 10.244.0.20:38832 - 11079 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119078s
	[INFO] 10.244.0.20:46935 - 41041 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098483s
	[INFO] 10.244.0.20:37882 - 9561 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115623s
	[INFO] 10.244.0.20:48439 - 33109 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001945577s
	[INFO] 10.244.0.20:50616 - 56069 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002383907s
	[INFO] 10.244.0.20:57294 - 58698 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001663115s
	[INFO] 10.244.0.20:54656 - 40690 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.0021102s
	[INFO] 10.244.0.21:53857 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000236547s
	[INFO] 10.244.0.21:53228 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130408s
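
The NXDOMAIN/NOERROR pairs above are the pod resolver's ndots:5 search path at work: registry.kube-system.svc.cluster.local has only four dots, so each resolv.conf suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) is tried and rejected before the name as given finally resolves. A tiny sketch that reproduces the candidate order; the search list is taken from the queries above, the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // candidates mirrors glibc/musl search-list expansion: a name with
    // fewer than ndots dots gets every search suffix before the bare name.
    func candidates(name string, search []string, ndots int) []string {
    	var out []string
    	if strings.Count(name, ".") < ndots {
    		for _, s := range search {
    			out = append(out, name+"."+s)
    		}
    	}
    	return append(out, name)
    }

    func main() {
    	search := []string{
    		"kube-system.svc.cluster.local",
    		"svc.cluster.local",
    		"cluster.local",
    		"us-east-2.compute.internal",
    	}
    	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
    		fmt.Println(q) // same suffix cascade as the coredns queries above
    	}
    }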
	
	
	==> describe nodes <==
	Name:               addons-843965
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-843965
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=addons-843965
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_55_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-843965
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:55:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-843965
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:58:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:57:58 +0000   Tue, 16 Jan 2024 02:55:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:57:58 +0000   Tue, 16 Jan 2024 02:55:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:57:58 +0000   Tue, 16 Jan 2024 02:55:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:57:58 +0000   Tue, 16 Jan 2024 02:55:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-843965
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 394a8448e81541d28502617e7d0fe1a2
	  System UUID:                6654ebef-dec4-4c43-b98e-a42300b6aa2b
	  Boot ID:                    db337b58-1f53-411c-9ff2-b8ff3dd0911c
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-rw7hq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  gadget                      gadget-j8jtx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gcp-auth                    gcp-auth-d4c87556c-2m7mf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-6rwpz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m37s
	  kube-system                 coredns-5dd5756b68-drb7k                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m45s
	  kube-system                 etcd-addons-843965                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m57s
	  kube-system                 kindnet-p7psr                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m45s
	  kube-system                 kube-apiserver-addons-843965                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-controller-manager-addons-843965        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-proxy-shxz5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-scheduler-addons-843965                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 metrics-server-7c66d45ddc-cshtq              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m40s
	  kube-system                 snapshot-controller-58dbcc7b99-kblzs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 snapshot-controller-58dbcc7b99-vtcnr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-ccdsg               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m43s  kube-proxy       
	  Normal  Starting                 2m57s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m57s  kubelet          Node addons-843965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s  kubelet          Node addons-843965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s  kubelet          Node addons-843965 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m57s  kubelet          Node addons-843965 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m57s  kubelet          Node addons-843965 status is now: NodeReady
	  Normal  RegisteredNode           2m46s  node-controller  Node addons-843965 event: Registered Node addons-843965 in Controller
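
The Allocated resources block above is straight addition over the pod table: CPU requests 100m+100m+100m+100m+250m+200m+100m+100m = 1050m, which integer-divides to 52% of the node's 2 CPUs (2000m); memory requests 90Mi+70Mi+100Mi+50Mi+200Mi+128Mi = 638Mi, about 8% of the 8022500Ki allocatable. A few lines of Go doing the same arithmetic, with the values copied from the table:

    package main

    import "fmt"

    func main() {
    	// Non-zero CPU requests (milli-cores) from the pod table above.
    	cpu := []int64{100, 100, 100, 100, 250, 200, 100, 100}
    	var sum int64
    	for _, c := range cpu {
    		sum += c
    	}
    	allocatable := int64(2000) // 2 CPUs
    	fmt.Printf("cpu: %dm (%d%%)\n", sum, sum*100/allocatable) // 1050m (52%)

    	// Non-zero memory requests (Mi) against 8022500Ki allocatable.
    	mem := []int64{90, 70, 100, 50, 200, 128}
    	sum = 0
    	for _, m := range mem {
    		sum += m
    	}
    	allocKi := int64(8022500)
    	fmt.Printf("memory: %dMi (%d%%)\n", sum, sum*1024*100/allocKi) // 638Mi (8%)
    }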
	
	
	==> dmesg <==
	[  +0.001077] FS-Cache: O-key=[8] '44dac90000000000'
	[  +0.000784] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000986] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000001afef535
	[  +0.001069] FS-Cache: N-key=[8] '44dac90000000000'
	[  +0.002505] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=00000071 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001002] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=000000002d197214
	[  +0.001094] FS-Cache: O-key=[8] '44dac90000000000'
	[  +0.000798] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001128] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000008d08245a
	[  +0.001092] FS-Cache: N-key=[8] '44dac90000000000'
	[  +2.139529] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=00000000136c64d0
	[  +0.001228] FS-Cache: O-key=[8] '43dac90000000000'
	[  +0.000720] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000007a0384fc
	[  +0.001092] FS-Cache: N-key=[8] '43dac90000000000'
	[  +0.318695] FS-Cache: Duplicate cookie detected
	[  +0.000817] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001202] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=00000000d8eb70b5
	[  +0.001211] FS-Cache: O-key=[8] '49dac90000000000'
	[  +0.000849] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.001037] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000001afef535
	[  +0.001307] FS-Cache: N-key=[8] '49dac90000000000'
	
	
	==> etcd [ce7400afe9ca1bff290931ece2139aa0de0fa0a1da85e8e691fdc4b690da7d05] <==
	{"level":"info","ts":"2024-01-16T02:55:19.094578Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T02:55:19.094593Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-16T02:55:19.095157Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-16T02:55:19.094026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-16T02:55:19.095493Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-16T02:55:19.096236Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T02:55:19.096363Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T02:55:19.929475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T02:55:19.929708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T02:55:19.929817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-16T02:55:19.930008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T02:55:19.930126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-16T02:55:19.930213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-16T02:55:19.930294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-16T02:55:19.933577Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:55:19.937681Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-843965 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T02:55:19.937865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:55:19.938975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T02:55:19.939357Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:55:19.940334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-16T02:55:19.951508Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:55:19.954622Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:55:19.954796Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:55:19.993511Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T02:55:19.993706Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [43a9d9f2634b7279fd0821a0cae509a696927fd7ccd31796cbc80e9bdcd4e301] <==
	2024/01/16 02:56:57 GCP Auth Webhook started!
	2024/01/16 02:57:09 Ready to marshal response ...
	2024/01/16 02:57:09 Ready to write response ...
	2024/01/16 02:57:24 Ready to marshal response ...
	2024/01/16 02:57:24 Ready to write response ...
	2024/01/16 02:57:25 Ready to marshal response ...
	2024/01/16 02:57:25 Ready to write response ...
	2024/01/16 02:57:32 Ready to marshal response ...
	2024/01/16 02:57:32 Ready to write response ...
	2024/01/16 02:57:47 Ready to marshal response ...
	2024/01/16 02:57:47 Ready to write response ...
	2024/01/16 02:58:11 Ready to marshal response ...
	2024/01/16 02:58:11 Ready to write response ...
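
Each marshal/write pair in the gcp-auth log corresponds to one mutating-admission round trip per pod create: decode the AdmissionReview, build a response, marshal it, write it back. A stripped-down handler in that shape, using k8s.io/api/admission/v1; the credential-mounting JSONPatch is omitted, and this is not the gcp-auth source:

    package main

    import (
    	"encoding/json"
    	"log"
    	"net/http"

    	admissionv1 "k8s.io/api/admission/v1"
    )

    func mutate(w http.ResponseWriter, r *http.Request) {
    	var review admissionv1.AdmissionReview
    	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
    		http.Error(w, err.Error(), http.StatusBadRequest)
    		return
    	}

    	// Allow the pod; a real webhook would attach a JSONPatch here that
    	// mounts the credential secret into the pod spec.
    	review.Response = &admissionv1.AdmissionResponse{
    		UID:     review.Request.UID,
    		Allowed: true,
    	}

    	log.Println("Ready to marshal response ...")
    	out, err := json.Marshal(review)
    	if err != nil {
    		http.Error(w, err.Error(), http.StatusInternalServerError)
    		return
    	}
    	log.Println("Ready to write response ...")
    	w.Header().Set("Content-Type", "application/json")
    	w.Write(out)
    }

    func main() {
    	http.HandleFunc("/mutate", mutate)
    	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
    }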
	
	
	==> kernel <==
	 02:58:23 up  9:40,  0 users,  load average: 1.29, 1.70, 2.08
	Linux addons-843965 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [cad3dfa1ad9e704d8beed303439c3b4ab3b0ba0d46fa9b4768d8e3deeb2aea88] <==
	I0116 02:56:19.786551       1 main.go:227] handling current node
	I0116 02:56:29.798700       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:56:29.798725       1 main.go:227] handling current node
	I0116 02:56:39.803945       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:56:39.803986       1 main.go:227] handling current node
	I0116 02:56:49.814016       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:56:49.814042       1 main.go:227] handling current node
	I0116 02:56:59.825493       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:56:59.825594       1 main.go:227] handling current node
	I0116 02:57:09.838380       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:09.838407       1 main.go:227] handling current node
	I0116 02:57:19.849253       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:19.849282       1 main.go:227] handling current node
	I0116 02:57:29.861150       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:29.861183       1 main.go:227] handling current node
	I0116 02:57:39.871472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:39.871499       1 main.go:227] handling current node
	I0116 02:57:49.884452       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:49.884476       1 main.go:227] handling current node
	I0116 02:57:59.898291       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:57:59.898412       1 main.go:227] handling current node
	I0116 02:58:09.902294       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:58:09.902323       1 main.go:227] handling current node
	I0116 02:58:19.914891       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 02:58:19.914919       1 main.go:227] handling current node
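
kindnet's log is a fixed-interval reconcile: every ~10s it lists nodes and, for each one with an InternalIP, (re)installs pod-network routes; on a single-node cluster that reduces to "handling current node". The shape of that loop, sketched with client-go (interval and messages mirror the log; the actual route programming is elided):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	for range time.Tick(10 * time.Second) {
    		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    		if err != nil {
    			continue // transient API errors: try again next tick
    		}
    		for _, n := range nodes.Items {
    			ips := map[string]struct{}{}
    			for _, a := range n.Status.Addresses {
    				if a.Type == "InternalIP" {
    					ips[a.Address] = struct{}{}
    				}
    			}
    			fmt.Printf("Handling node with IPs: %v\n", ips)
    			// route updates for remote nodes would go here
    		}
    	}
    }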
	
	
	==> kube-apiserver [51c33b06e0ddb509cf60ffeb56a310ca8f81bb4fccf327d00b9cd387c3c34398] <==
	I0116 02:55:49.136428       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.98.132.166"}
	I0116 02:56:22.442933       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0116 02:56:44.168702       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.124.61:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.124.61:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.124.61:443: connect: connection refused
	W0116 02:56:44.168830       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 02:56:44.168881       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0116 02:56:44.174875       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.124.61:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.124.61:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.124.61:443: connect: connection refused
	W0116 02:56:44.710610       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 02:56:44.710657       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 02:56:44.710665       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 02:56:44.711756       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 02:56:44.711826       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 02:56:44.711839       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 02:56:49.175287       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 02:56:49.181130       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 02:56:49.181481       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.124.61:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.124.61:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	E0116 02:56:49.183592       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 02:56:49.239632       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 02:56:49.246694       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 02:57:22.446205       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0116 02:57:48.760298       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0116 02:57:58.801179       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0116 02:58:22.446258       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
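
The apiserver errors above trace the metrics.k8s.io aggregation path: the aggregator probes https://10.97.124.61:443 (metrics-server's service IP), and until the backend answers it marks the APIService unavailable and requeues the OpenAPI download. One way to watch that Available condition from outside, sketched with the dynamic client; the GVR is apiregistration.k8s.io/v1 apiservices, and kubeconfig handling is kept to the default path:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/client-go/dynamic"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	dyn := dynamic.NewForConfigOrDie(cfg)

    	gvr := schema.GroupVersionResource{
    		Group:    "apiregistration.k8s.io",
    		Version:  "v1",
    		Resource: "apiservices",
    	}
    	svc, err := dyn.Resource(gvr).Get(context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// status.conditions carries the Available condition the log is about.
    	conds, found, err := unstructured.NestedSlice(svc.Object, "status", "conditions")
    	if err != nil || !found {
    		panic("no status.conditions on APIService")
    	}
    	fmt.Println(conds)
    }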
	
	
	==> kube-controller-manager [7d7aa230689f6e65489805a395467f62456ad976f34a28bdf34d0c0011948874] <==
	I0116 02:56:57.692094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="17.777608ms"
	I0116 02:56:57.692275       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="64.318µs"
	I0116 02:56:58.485561       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:57:04.026771       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 02:57:04.063862       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 02:57:07.009931       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 02:57:07.034991       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 02:57:07.448256       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:57:10.362867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="25.561319ms"
	I0116 02:57:10.363094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="191.092µs"
	I0116 02:57:13.774906       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="7.623µs"
	I0116 02:57:22.448346       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:57:24.814681       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0116 02:57:24.988158       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:57:24.988221       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:57:33.582253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="100.838µs"
	I0116 02:57:37.449601       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:57:47.259964       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:58:01.291618       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:58:01.291805       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:58:07.451090       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:58:10.545585       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 02:58:21.071771       1 namespace_controller.go:182] "Namespace has been deleted" namespace="local-path-storage"
	I0116 02:58:21.154240       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0116 02:58:21.282892       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	
	
	==> kube-proxy [653b92beb0f55c90a6fc42be3424ed34e624a629bea0fae97ea010f8006e8815] <==
	I0116 02:55:39.506662       1 server_others.go:69] "Using iptables proxy"
	I0116 02:55:39.559217       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0116 02:55:39.605668       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0116 02:55:39.608204       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:55:39.608243       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0116 02:55:39.608252       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0116 02:55:39.608297       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:55:39.608520       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:55:39.608530       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:55:39.609630       1 config.go:188] "Starting service config controller"
	I0116 02:55:39.609668       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:55:39.609687       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:55:39.609695       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:55:39.611404       1 config.go:315] "Starting node config controller"
	I0116 02:55:39.611418       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:55:39.709902       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:55:39.709954       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:55:39.711554       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [79153b07155cf33f0f0fda110c5ea9d3a1f2e3c7f10d052d62d439b265cadc46] <==
	W0116 02:55:23.604486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:55:23.604507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 02:55:23.604641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:55:23.604767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 02:55:23.604741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:55:23.604880       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 02:55:23.611888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:55:23.612453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:55:23.611975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:55:23.612713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:55:23.612025       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:55:23.612817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:55:23.612113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:55:23.612896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 02:55:23.612164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:55:23.612969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:55:23.612309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:55:23.613052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:55:23.612354       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:55:23.613161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 02:55:23.612416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:55:23.613245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:55:23.612686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 02:55:23.613332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0116 02:55:24.696625       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.134960    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d"} err="failed to get container status \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.135100    1340 scope.go:117] "RemoveContainer" containerID="47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.135737    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438"} err="failed to get container status \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\": rpc error: code = NotFound desc = an error occurred when try to find container \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.135861    1340 scope.go:117] "RemoveContainer" containerID="dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.136405    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59"} err="failed to get container status \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\": rpc error: code = NotFound desc = an error occurred when try to find container \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.136523    1340 scope.go:117] "RemoveContainer" containerID="f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.137052    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe"} err="failed to get container status \"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"f33510b13d22bae7784e28155911e21468aceede2d59598df418a33ab2d526fe\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.137174    1340 scope.go:117] "RemoveContainer" containerID="0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.137671    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405"} err="failed to get container status \"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c6a0d005c3e94e7fb9c44984bfa564def38af4318aa856ec1b3b9be7fff2405\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.137811    1340 scope.go:117] "RemoveContainer" containerID="3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.138343    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad"} err="failed to get container status \"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d930891e4e148ca0f99f5f3ac5bd2221a1d57b2868c9eec2d4cda7224f930ad\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.138460    1340 scope.go:117] "RemoveContainer" containerID="cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.138948    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d"} err="failed to get container status \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd946251c5f6887cab73cf24abb8f9d0a3bc44123c104dc8bb21f7cfb36b4f5d\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.139094    1340 scope.go:117] "RemoveContainer" containerID="47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.139629    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438"} err="failed to get container status \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\": rpc error: code = NotFound desc = an error occurred when try to find container \"47b2e9515c9aedbf46a67f35ccfcf477387c22ad81b99859e8b2b51063aba438\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.139748    1340 scope.go:117] "RemoveContainer" containerID="dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.140269    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59"} err="failed to get container status \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\": rpc error: code = NotFound desc = an error occurred when try to find container \"dae36404a78089eeaaaeb311ec3be44eaa08a2a83f84ecea2c7caa6b21de1c59\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.140375    1340 scope.go:117] "RemoveContainer" containerID="1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.147549    1340 scope.go:117] "RemoveContainer" containerID="1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: E0116 02:58:22.148202    1340 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09\": not found" containerID="1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.148263    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09"} err="failed to get container status \"1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c28049e5ea99b4fb82236318d05aa79cdf7e21b29d5043bccde3eb359085b09\": not found"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.148278    1340 scope.go:117] "RemoveContainer" containerID="afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.157533    1340 scope.go:117] "RemoveContainer" containerID="afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: E0116 02:58:22.158302    1340 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41\": not found" containerID="afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41"
	Jan 16 02:58:22 addons-843965 kubelet[1340]: I0116 02:58:22.158354    1340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41"} err="failed to get container status \"afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41\": rpc error: code = NotFound desc = an error occurred when try to find container \"afef3455be17ec5aadc8198f1b4057f47e166b4b240ac321f2fb0023ae212a41\": not found"
	
	
	==> storage-provisioner [52cc6edb069f5ef20c0e1aad56d892e4804d050c4b15c380e01c9531cd31f778] <==
	I0116 02:55:44.382026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:55:44.416843       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:55:44.416950       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:55:44.433293       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:55:44.435186       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-843965_5dbe9549-f38f-487e-be21-e9fddd196f3e!
	I0116 02:55:44.440583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ad83ebc-61b6-482a-a784-f8e0ed412c1a", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-843965_5dbe9549-f38f-487e-be21-e9fddd196f3e became leader
	I0116 02:55:44.535941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-843965_5dbe9549-f38f-487e-be21-e9fddd196f3e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-843965 -n addons-843965
helpers_test.go:261: (dbg) Run:  kubectl --context addons-843965 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-r6qf8 ingress-nginx-admission-patch-75m88
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-843965 describe pod ingress-nginx-admission-create-r6qf8 ingress-nginx-admission-patch-75m88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-843965 describe pod ingress-nginx-admission-create-r6qf8 ingress-nginx-admission-patch-75m88: exit status 1 (83.648097ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r6qf8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-75m88" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-843965 describe pod ingress-nginx-admission-create-r6qf8 ingress-nginx-admission-patch-75m88: exit status 1
--- FAIL: TestAddons/parallel/CloudSpanner (8.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image load --daemon gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 image load --daemon gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr: (4.189075897s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-060112" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.45s)
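The load-then-list sequence above can be retraced by hand. The sketch below is illustrative, reusing the profile name and image tag from the log and assuming a running functional-060112 profile with the containerd runtime; the node-side `ctr` check is an extra assumption, not part of the test. The same check applies to the ImageReloadDaemon and ImageTagAndLoadDaemon failures below.

	# pull and retag on the host, as the TagAndLoad variant does
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-060112
	# load into the node, then ask minikube whether the image is visible
	out/minikube-linux-arm64 -p functional-060112 image load --daemon gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr
	out/minikube-linux-arm64 -p functional-060112 image ls | grep addon-resizer
	# with containerd, the node's image store can also be inspected directly
	out/minikube-linux-arm64 -p functional-060112 ssh "sudo ctr -n k8s.io images ls" | grep addon-resizer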

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image load --daemon gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 image load --daemon gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr: (3.214497481s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-060112" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.156997504s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-060112
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image load --daemon gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 image load --daemon gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr: (3.142308506s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-060112" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image save gcr.io/google-containers/addon-resizer:functional-060112 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)
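The save step can be exercised in isolation to see whether a tarball is written at all. A minimal sketch; the /tmp path is hypothetical, standing in for the workspace path used by the test:

	SAVE=/tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-060112 image save gcr.io/google-containers/addon-resizer:functional-060112 "$SAVE" --alsologtostderr
	# the test only asserts the file exists; listing the archive additionally confirms it is readable
	test -s "$SAVE" && tar -tf "$SAVE" | head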

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0116 03:04:25.815624 1924947 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:04:25.816194 1924947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:04:25.816207 1924947 out.go:309] Setting ErrFile to fd 2...
	I0116 03:04:25.816213 1924947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:04:25.816527 1924947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 03:04:25.817179 1924947 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:04:25.817337 1924947 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:04:25.818028 1924947 cli_runner.go:164] Run: docker container inspect functional-060112 --format={{.State.Status}}
	I0116 03:04:25.836434 1924947 ssh_runner.go:195] Run: systemctl --version
	I0116 03:04:25.836554 1924947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060112
	I0116 03:04:25.855229 1924947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35038 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/functional-060112/id_rsa Username:docker}
	I0116 03:04:25.955331 1924947 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0116 03:04:25.955415 1924947 cache_images.go:254] Failed to load cached images for profile functional-060112. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0116 03:04:25.955433 1924947 cache_images.go:262] succeeded pushing to: 
	I0116 03:04:25.955439 1924947 cache_images.go:263] failed pushing to: functional-060112

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
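The stderr above ends in `stat ...: no such file or directory`, so this failure follows directly from ImageSaveToFile never writing the tarball. When reproducing manually, the two steps can be chained behind a guard (a sketch, with the same hypothetical /tmp path as above):

	SAVE=/tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-060112 image save gcr.io/google-containers/addon-resizer:functional-060112 "$SAVE" --alsologtostderr
	if [ -s "$SAVE" ]; then
	    out/minikube-linux-arm64 -p functional-060112 image load "$SAVE" --alsologtostderr
	else
	    echo "image save produced no tarball; skipping load" >&2
	fi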

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (55.96s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-846462 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-846462 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.357804407s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-846462 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-846462 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9e86e0ef-b744-4df0-9fee-821c909a2b8c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9e86e0ef-b744-4df0-9fee-821c909a2b8c] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.003233715s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-846462 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-846462 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-846462 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.019781348s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
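The 15s nslookup timeout can be probed with a bounded retry against the node IP reported by `minikube ip`, which separates an ingress-dns pod that is merely slow to start from one that never answers. A minimal sketch using dig in place of nslookup; dig being available on the runner is an assumption:

	NODE_IP=192.168.49.2
	for i in 1 2 3 4 5; do
	    # +time/+tries bound each attempt; break as soon as the server responds at all
	    dig @"$NODE_IP" hello-john.test +time=2 +tries=1 +short && break
	    sleep 3
	done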
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-846462 addons disable ingress-dns --alsologtostderr -v=1
E0116 03:06:58.180378 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-846462 addons disable ingress-dns --alsologtostderr -v=1: (8.078905867s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-846462 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-846462 addons disable ingress --alsologtostderr -v=1: (7.562294389s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-846462
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-846462:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ad7602dde918a86444890c30508ff5eb514224783391b64173826eaa9f5b0cc",
	        "Created": "2024-01-16T03:04:54.480064945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1926088,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T03:04:54.786595752Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/5ad7602dde918a86444890c30508ff5eb514224783391b64173826eaa9f5b0cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ad7602dde918a86444890c30508ff5eb514224783391b64173826eaa9f5b0cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ad7602dde918a86444890c30508ff5eb514224783391b64173826eaa9f5b0cc/hosts",
	        "LogPath": "/var/lib/docker/containers/5ad7602dde918a86444890c30508ff5eb514224783391b64173826eaa9f5b0cc/5ad7602dde918a86444890c30508ff5eb514224783391b64173826eaa9f5b0cc-json.log",
	        "Name": "/ingress-addon-legacy-846462",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-846462:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-846462",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8b1dc398d1edbf4de558bcb679f85da46b36fca6385cccac453bbcdd515e460d-init/diff:/var/lib/docker/overlay2/261e7c2ec33123e281bd6870ab3b0bda4a6870d39bd5f5e877084941df0b6b78/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b1dc398d1edbf4de558bcb679f85da46b36fca6385cccac453bbcdd515e460d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b1dc398d1edbf4de558bcb679f85da46b36fca6385cccac453bbcdd515e460d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b1dc398d1edbf4de558bcb679f85da46b36fca6385cccac453bbcdd515e460d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-846462",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-846462/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-846462",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-846462",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-846462",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85422b1f4740261c22d6174a8e541262cbd49ad06689cbdb4ce84339db0f37a4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35042"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35039"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35041"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35040"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/85422b1f4740",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-846462": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ad7602dde91",
	                        "ingress-addon-legacy-846462"
	                    ],
	                    "NetworkID": "2da8e9068d1168ff3f5dbc35f4cb70e4c0a88988f2673f315fbb4e8ce9b5d364",
	                    "EndpointID": "b75dfcbe8ba245535098af8b8d9012331c13ca3de0e186c7706d1a6d36937a52",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-846462 -n ingress-addon-legacy-846462
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-846462 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-846462 logs -n 25: (1.437726459s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-060112 image ls                                                   | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	| image   | functional-060112 image load --daemon                                        | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-060112                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112 image ls                                                   | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	| image   | functional-060112 image load --daemon                                        | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-060112                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112 image ls                                                   | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	| image   | functional-060112 image save                                                 | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-060112                     |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112 image rm                                                   | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-060112                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112 image ls                                                   | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	| image   | functional-060112 image load                                                 | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112 image save --daemon                                        | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-060112                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112                                                            | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112                                                            | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112                                                            | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | image ls --format json                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-060112                                                            | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | image ls --format table                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-060112 ssh pgrep                                                  | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-060112 image build -t                                             | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	|         | localhost/my-image:functional-060112                                         |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image   | functional-060112 image ls                                                   | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	| delete  | -p functional-060112                                                         | functional-060112           | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:04 UTC |
	| start   | -p ingress-addon-legacy-846462                                               | ingress-addon-legacy-846462 | jenkins | v1.32.0 | 16 Jan 24 03:04 UTC | 16 Jan 24 03:06 UTC |
	|         | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=containerd                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-846462                                                  | ingress-addon-legacy-846462 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | addons enable ingress                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-846462                                                  | ingress-addon-legacy-846462 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-846462                                                  | ingress-addon-legacy-846462 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	|         | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-846462 ip                                               | ingress-addon-legacy-846462 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:06 UTC |
	| addons  | ingress-addon-legacy-846462                                                  | ingress-addon-legacy-846462 | jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:07 UTC |
	|         | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-846462                                                  | ingress-addon-legacy-846462 | jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:07 UTC |
	|         | addons disable ingress                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:04:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:04:32.892878 1925638 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:04:32.893039 1925638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:04:32.893049 1925638 out.go:309] Setting ErrFile to fd 2...
	I0116 03:04:32.893055 1925638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:04:32.893331 1925638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 03:04:32.893816 1925638 out.go:303] Setting JSON to false
	I0116 03:04:32.894711 1925638 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35209,"bootTime":1705339064,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 03:04:32.894792 1925638 start.go:138] virtualization:  
	I0116 03:04:32.897669 1925638 out.go:177] * [ingress-addon-legacy-846462] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:04:32.900144 1925638 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:04:32.900292 1925638 notify.go:220] Checking for updates...
	I0116 03:04:32.902250 1925638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:04:32.904394 1925638 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 03:04:32.906273 1925638 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 03:04:32.908012 1925638 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:04:32.909931 1925638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:04:32.912522 1925638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:04:32.936478 1925638 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:04:32.936602 1925638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:04:33.022043 1925638 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-16 03:04:33.011809061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:04:33.022157 1925638 docker.go:295] overlay module found
	I0116 03:04:33.024235 1925638 out.go:177] * Using the docker driver based on user configuration
	I0116 03:04:33.026118 1925638 start.go:298] selected driver: docker
	I0116 03:04:33.026142 1925638 start.go:902] validating driver "docker" against <nil>
	I0116 03:04:33.026156 1925638 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:04:33.026797 1925638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:04:33.094720 1925638 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-16 03:04:33.085451067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:04:33.094875 1925638 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:04:33.095219 1925638 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:04:33.097394 1925638 out.go:177] * Using Docker driver with root privileges
	I0116 03:04:33.099356 1925638 cni.go:84] Creating CNI manager for ""
	I0116 03:04:33.099380 1925638 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 03:04:33.099393 1925638 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 03:04:33.099410 1925638 start_flags.go:321] config:
	{Name:ingress-addon-legacy-846462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-846462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:04:33.101745 1925638 out.go:177] * Starting control plane node ingress-addon-legacy-846462 in cluster ingress-addon-legacy-846462
	I0116 03:04:33.103958 1925638 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0116 03:04:33.106362 1925638 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 03:04:33.108541 1925638 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0116 03:04:33.108628 1925638 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 03:04:33.126552 1925638 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 03:04:33.126578 1925638 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 03:04:33.172057 1925638 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0116 03:04:33.172094 1925638 cache.go:56] Caching tarball of preloaded images
	I0116 03:04:33.172266 1925638 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0116 03:04:33.174808 1925638 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0116 03:04:33.176872 1925638 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0116 03:04:33.297542 1925638 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0116 03:04:46.535514 1925638 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0116 03:04:46.535627 1925638 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0116 03:04:47.804023 1925638 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
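
The download and checksum verification above can be reproduced by hand; a minimal sketch using the URL and md5 checksum reported in the log (curl and md5sum are assumed to be available, and this is not a step the test itself runs):

	# fetch the same preload tarball minikube downloaded
	curl -fLo preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 \
	  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4"
	# verify it against the checksum from the log line above
	echo "9e505be2989b8c051b1372c317471064  preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4" | md5sum -c -
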
	I0116 03:04:47.804422 1925638 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/config.json ...
	I0116 03:04:47.804456 1925638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/config.json: {Name:mka3d7a2e533740941e8d228d0624e12baacf6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:04:47.805334 1925638 cache.go:194] Successfully downloaded all kic artifacts
	I0116 03:04:47.805404 1925638 start.go:365] acquiring machines lock for ingress-addon-legacy-846462: {Name:mk3a7eaee5192a8255e9c79e74d7fe2840c86dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:04:47.805501 1925638 start.go:369] acquired machines lock for "ingress-addon-legacy-846462" in 76.289µs
	I0116 03:04:47.805537 1925638 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-846462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-846462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 03:04:47.805614 1925638 start.go:125] createHost starting for "" (driver="docker")
	I0116 03:04:47.808364 1925638 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0116 03:04:47.808616 1925638 start.go:159] libmachine.API.Create for "ingress-addon-legacy-846462" (driver="docker")
	I0116 03:04:47.808641 1925638 client.go:168] LocalClient.Create starting
	I0116 03:04:47.808721 1925638 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem
	I0116 03:04:47.808756 1925638 main.go:141] libmachine: Decoding PEM data...
	I0116 03:04:47.808776 1925638 main.go:141] libmachine: Parsing certificate...
	I0116 03:04:47.808832 1925638 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem
	I0116 03:04:47.808854 1925638 main.go:141] libmachine: Decoding PEM data...
	I0116 03:04:47.808869 1925638 main.go:141] libmachine: Parsing certificate...
	I0116 03:04:47.809245 1925638 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-846462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 03:04:47.828314 1925638 cli_runner.go:211] docker network inspect ingress-addon-legacy-846462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 03:04:47.828399 1925638 network_create.go:281] running [docker network inspect ingress-addon-legacy-846462] to gather additional debugging logs...
	I0116 03:04:47.828420 1925638 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-846462
	W0116 03:04:47.845976 1925638 cli_runner.go:211] docker network inspect ingress-addon-legacy-846462 returned with exit code 1
	I0116 03:04:47.846012 1925638 network_create.go:284] error running [docker network inspect ingress-addon-legacy-846462]: docker network inspect ingress-addon-legacy-846462: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-846462 not found
	I0116 03:04:47.846028 1925638 network_create.go:286] output of [docker network inspect ingress-addon-legacy-846462]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-846462 not found
	
	** /stderr **
	I0116 03:04:47.846146 1925638 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:04:47.863363 1925638 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40001194d0}
	I0116 03:04:47.863432 1925638 network_create.go:124] attempt to create docker network ingress-addon-legacy-846462 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 03:04:47.863490 1925638 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-846462 ingress-addon-legacy-846462
	I0116 03:04:47.939246 1925638 network_create.go:108] docker network ingress-addon-legacy-846462 192.168.49.0/24 created
	I0116 03:04:47.939282 1925638 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-846462" container
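
To confirm the subnet, gateway, and static IP the log reports here, the created bridge network can be inspected directly (a sketch for after-the-fact debugging; the test does not run this command):

	# query the IPAM config of the network minikube just created
	docker network inspect ingress-addon-legacy-846462 \
	  --format '{{(index .IPAM.Config 0).Subnet}} -> gateway {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 -> gateway 192.168.49.1
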
	I0116 03:04:47.939354 1925638 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 03:04:47.956650 1925638 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-846462 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-846462 --label created_by.minikube.sigs.k8s.io=true
	I0116 03:04:47.975021 1925638 oci.go:103] Successfully created a docker volume ingress-addon-legacy-846462
	I0116 03:04:47.975106 1925638 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-846462-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-846462 --entrypoint /usr/bin/test -v ingress-addon-legacy-846462:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 03:04:49.473531 1925638 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-846462-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-846462 --entrypoint /usr/bin/test -v ingress-addon-legacy-846462:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.498385424s)
	I0116 03:04:49.473568 1925638 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-846462
	I0116 03:04:49.473589 1925638 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0116 03:04:49.473609 1925638 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 03:04:49.473692 1925638 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-846462:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 03:04:54.392784 1925638 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-846462:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.919045283s)
	I0116 03:04:54.392817 1925638 kic.go:203] duration metric: took 4.919205 seconds to extract preloaded images to volume
	W0116 03:04:54.392960 1925638 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 03:04:54.393069 1925638 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 03:04:54.464479 1925638 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-846462 --name ingress-addon-legacy-846462 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-846462 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-846462 --network ingress-addon-legacy-846462 --ip 192.168.49.2 --volume ingress-addon-legacy-846462:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 03:04:54.795743 1925638 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-846462 --format={{.State.Running}}
	I0116 03:04:54.819962 1925638 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-846462 --format={{.State.Status}}
	I0116 03:04:54.842301 1925638 cli_runner.go:164] Run: docker exec ingress-addon-legacy-846462 stat /var/lib/dpkg/alternatives/iptables
	I0116 03:04:54.913297 1925638 oci.go:144] the created container "ingress-addon-legacy-846462" has a running status.
	I0116 03:04:54.913324 1925638 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa...
	I0116 03:04:56.148959 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 03:04:56.149007 1925638 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 03:04:56.170047 1925638 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-846462 --format={{.State.Status}}
	I0116 03:04:56.187947 1925638 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 03:04:56.187973 1925638 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-846462 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 03:04:56.243320 1925638 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-846462 --format={{.State.Status}}
	I0116 03:04:56.261791 1925638 machine.go:88] provisioning docker machine ...
	I0116 03:04:56.261820 1925638 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-846462"
	I0116 03:04:56.261897 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:04:56.284976 1925638 main.go:141] libmachine: Using SSH client type: native
	I0116 03:04:56.285411 1925638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35043 <nil> <nil>}
	I0116 03:04:56.285423 1925638 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-846462 && echo "ingress-addon-legacy-846462" | sudo tee /etc/hostname
	I0116 03:04:56.436204 1925638 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-846462
	
	I0116 03:04:56.436301 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:04:56.454954 1925638 main.go:141] libmachine: Using SSH client type: native
	I0116 03:04:56.455376 1925638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35043 <nil> <nil>}
	I0116 03:04:56.455402 1925638 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-846462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-846462/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-846462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:04:56.594640 1925638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:04:56.594730 1925638 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17967-1885793/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-1885793/.minikube}
	I0116 03:04:56.594780 1925638 ubuntu.go:177] setting up certificates
	I0116 03:04:56.594803 1925638 provision.go:83] configureAuth start
	I0116 03:04:56.594876 1925638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-846462
	I0116 03:04:56.612823 1925638 provision.go:138] copyHostCerts
	I0116 03:04:56.612866 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.pem
	I0116 03:04:56.612898 1925638 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.pem, removing ...
	I0116 03:04:56.612904 1925638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.pem
	I0116 03:04:56.612980 1925638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.pem (1078 bytes)
	I0116 03:04:56.613062 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-1885793/.minikube/cert.pem
	I0116 03:04:56.613079 1925638 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-1885793/.minikube/cert.pem, removing ...
	I0116 03:04:56.613083 1925638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-1885793/.minikube/cert.pem
	I0116 03:04:56.613110 1925638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/cert.pem (1123 bytes)
	I0116 03:04:56.613156 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-1885793/.minikube/key.pem
	I0116 03:04:56.613171 1925638 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-1885793/.minikube/key.pem, removing ...
	I0116 03:04:56.613175 1925638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-1885793/.minikube/key.pem
	I0116 03:04:56.613201 1925638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-1885793/.minikube/key.pem (1679 bytes)
	I0116 03:04:56.613276 1925638 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-846462 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-846462]
	I0116 03:04:57.343183 1925638 provision.go:172] copyRemoteCerts
	I0116 03:04:57.343279 1925638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:04:57.343324 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:04:57.361209 1925638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35043 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa Username:docker}
	I0116 03:04:57.460152 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 03:04:57.460257 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:04:57.488984 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 03:04:57.489061 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:04:57.517772 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 03:04:57.517839 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:04:57.546522 1925638 provision.go:86] duration metric: configureAuth took 951.692163ms
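
The three certificates provisioned above (ca.pem, server.pem, server-key.pem) land in /etc/docker inside the node container; a quick hand check after the fact, assuming the container is still running and openssl is present in the kicbase image (neither is guaranteed by the log):

	# confirm the certs were copied in
	docker exec ingress-addon-legacy-846462 ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	# inspect the SANs baked into the server cert (should include 192.168.49.2)
	docker exec ingress-addon-legacy-846462 openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
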
	I0116 03:04:57.546596 1925638 ubuntu.go:193] setting minikube options for container-runtime
	I0116 03:04:57.546796 1925638 config.go:182] Loaded profile config "ingress-addon-legacy-846462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0116 03:04:57.546812 1925638 machine.go:91] provisioned docker machine in 1.285003002s
	I0116 03:04:57.546821 1925638 client.go:171] LocalClient.Create took 9.738169397s
	I0116 03:04:57.546844 1925638 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-846462" took 9.73822903s
	I0116 03:04:57.546856 1925638 start.go:300] post-start starting for "ingress-addon-legacy-846462" (driver="docker")
	I0116 03:04:57.546866 1925638 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:04:57.546926 1925638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:04:57.546984 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:04:57.566525 1925638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35043 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa Username:docker}
	I0116 03:04:57.664369 1925638 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:04:57.669695 1925638 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 03:04:57.669750 1925638 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 03:04:57.669767 1925638 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 03:04:57.669779 1925638 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 03:04:57.669792 1925638 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-1885793/.minikube/addons for local assets ...
	I0116 03:04:57.669875 1925638 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-1885793/.minikube/files for local assets ...
	I0116 03:04:57.669970 1925638 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-1885793/.minikube/files/etc/ssl/certs/18911652.pem -> 18911652.pem in /etc/ssl/certs
	I0116 03:04:57.669985 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/files/etc/ssl/certs/18911652.pem -> /etc/ssl/certs/18911652.pem
	I0116 03:04:57.670113 1925638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:04:57.680754 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/files/etc/ssl/certs/18911652.pem --> /etc/ssl/certs/18911652.pem (1708 bytes)
	I0116 03:04:57.709020 1925638 start.go:303] post-start completed in 162.150072ms
	I0116 03:04:57.709455 1925638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-846462
	I0116 03:04:57.727427 1925638 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/config.json ...
	I0116 03:04:57.727714 1925638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:04:57.727760 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:04:57.745662 1925638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35043 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa Username:docker}
	I0116 03:04:57.839795 1925638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 03:04:57.845540 1925638 start.go:128] duration metric: createHost completed in 10.039911116s
	I0116 03:04:57.845567 1925638 start.go:83] releasing machines lock for "ingress-addon-legacy-846462", held for 10.040044822s
	I0116 03:04:57.845650 1925638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-846462
	I0116 03:04:57.863302 1925638 ssh_runner.go:195] Run: cat /version.json
	I0116 03:04:57.863323 1925638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:04:57.863355 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:04:57.863383 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:04:57.883181 1925638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35043 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa Username:docker}
	I0116 03:04:57.884941 1925638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35043 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa Username:docker}
	I0116 03:04:58.119167 1925638 ssh_runner.go:195] Run: systemctl --version
	I0116 03:04:58.124941 1925638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:04:58.130637 1925638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0116 03:04:58.161360 1925638 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0116 03:04:58.161485 1925638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:04:58.195728 1925638 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0116 03:04:58.195755 1925638 start.go:475] detecting cgroup driver to use...
	I0116 03:04:58.195823 1925638 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 03:04:58.195911 1925638 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 03:04:58.210791 1925638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:04:58.224859 1925638 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:04:58.224952 1925638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:04:58.241685 1925638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:04:58.258883 1925638 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:04:58.357773 1925638 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:04:58.466968 1925638 docker.go:233] disabling docker service ...
	I0116 03:04:58.467055 1925638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:04:58.491522 1925638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:04:58.505366 1925638 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:04:58.607318 1925638 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:04:58.712478 1925638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:04:58.726381 1925638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:04:58.746675 1925638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0116 03:04:58.758743 1925638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 03:04:58.770778 1925638 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 03:04:58.770860 1925638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 03:04:58.782855 1925638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:04:58.794637 1925638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 03:04:58.806502 1925638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:04:58.818290 1925638 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:04:58.829194 1925638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 03:04:58.840892 1925638 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:04:58.851426 1925638 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:04:58.861801 1925638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:04:58.959595 1925638 ssh_runner.go:195] Run: sudo systemctl restart containerd
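
The steps from 03:04:58.746675 through 03:04:58.861801 rewrite /etc/containerd/config.toml in place with sed before this restart; condensed into a standalone sketch (the same edits, run inside the node container, assuming the stock kicbase config layout):

	# pin the sandbox image and select the cgroupfs driver, as the log does
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	# migrate any v1/runc.v1 runtime references to runc.v2
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd
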
	I0116 03:04:59.087347 1925638 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0116 03:04:59.087462 1925638 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0116 03:04:59.092033 1925638 start.go:543] Will wait 60s for crictl version
	I0116 03:04:59.092120 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:04:59.096334 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:04:59.143045 1925638 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0116 03:04:59.143151 1925638 ssh_runner.go:195] Run: containerd --version
	I0116 03:04:59.171354 1925638 ssh_runner.go:195] Run: containerd --version
	I0116 03:04:59.200228 1925638 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.26 ...
	I0116 03:04:59.202263 1925638 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-846462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:04:59.219384 1925638 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 03:04:59.224436 1925638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:04:59.237327 1925638 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0116 03:04:59.237400 1925638 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:04:59.276311 1925638 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 03:04:59.276381 1925638 ssh_runner.go:195] Run: which lz4
	I0116 03:04:59.280633 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0116 03:04:59.280739 1925638 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:04:59.285076 1925638 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:04:59.285109 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0116 03:05:01.646796 1925638 containerd.go:548] Took 2.366094 seconds to copy over tarball
	I0116 03:05:01.646872 1925638 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:05:04.368174 1925638 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.721274856s)
	I0116 03:05:04.368203 1925638 containerd.go:555] Took 2.721381 seconds to extract the tarball
	I0116 03:05:04.368214 1925638 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:05:04.452690 1925638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:05:04.549993 1925638 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 03:05:04.703759 1925638 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:05:04.748659 1925638 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 03:05:04.748689 1925638 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:05:04.748764 1925638 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:05:04.748960 1925638 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 03:05:04.749035 1925638 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 03:05:04.749127 1925638 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 03:05:04.749203 1925638 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 03:05:04.749270 1925638 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0116 03:05:04.749335 1925638 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0116 03:05:04.749418 1925638 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0116 03:05:04.750339 1925638 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 03:05:04.750731 1925638 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:05:04.750974 1925638 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0116 03:05:04.751120 1925638 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0116 03:05:04.751236 1925638 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 03:05:04.751321 1925638 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 03:05:04.751611 1925638 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0116 03:05:04.751666 1925638 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 03:05:05.104334 1925638 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"
	I0116 03:05:05.104450 1925638 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0116 03:05:05.105806 1925638 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 03:05:05.105942 1925638 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257"
	I0116 03:05:05.105998 1925638 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0116 03:05:05.114630 1925638 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0116 03:05:05.114766 1925638 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c"
	I0116 03:05:05.114829 1925638 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0116 03:05:05.128288 1925638 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 03:05:05.128410 1925638 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7"
	I0116 03:05:05.128471 1925638 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0116 03:05:05.131592 1925638 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 03:05:05.131809 1925638 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18"
	I0116 03:05:05.131866 1925638 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0116 03:05:05.140352 1925638 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 03:05:05.140478 1925638 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79"
	I0116 03:05:05.140546 1925638 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0116 03:05:05.151153 1925638 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0116 03:05:05.151308 1925638 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03"
	I0116 03:05:05.151366 1925638 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0116 03:05:05.269590 1925638 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0116 03:05:05.269790 1925638 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I0116 03:05:05.269881 1925638 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
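
The "arch mismatch: want arm64 got amd64" warnings above come from minikube inspecting the cached image configs; a comparable hand check against the registry can be made with a client such as crane (crane and jq are assumptions here, not tools the test uses):

	# ask the registry which architecture a reference resolves to for arm64
	crane config --platform linux/arm64 registry.k8s.io/kube-apiserver:v1.18.20 | jq -r .architecture
	# the warning fires when the locally cached copy reports amd64 on an arm64 host
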
	I0116 03:05:05.623936 1925638 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0116 03:05:05.623975 1925638 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 03:05:05.624027 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:05:05.624141 1925638 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0116 03:05:05.624168 1925638 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0116 03:05:05.624230 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:05:05.992991 1925638 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0116 03:05:05.993033 1925638 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0116 03:05:05.993084 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:05:06.041127 1925638 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0116 03:05:06.041223 1925638 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 03:05:06.041306 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:05:06.041920 1925638 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0116 03:05:06.041982 1925638 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 03:05:06.042049 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:05:06.042602 1925638 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0116 03:05:06.042656 1925638 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 03:05:06.042719 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:05:06.070303 1925638 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0116 03:05:06.070412 1925638 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0116 03:05:06.070499 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:05:06.086996 1925638 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0116 03:05:06.087104 1925638 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:05:06.087187 1925638 ssh_runner.go:195] Run: which crictl
	I0116 03:05:06.087311 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0116 03:05:06.087467 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0116 03:05:06.087542 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0116 03:05:06.087568 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 03:05:06.087604 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0116 03:05:06.087637 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0116 03:05:06.087406 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0116 03:05:06.284306 1925638 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0116 03:05:06.284363 1925638 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0116 03:05:06.284395 1925638 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0116 03:05:06.284439 1925638 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0116 03:05:06.284476 1925638 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0116 03:05:06.284476 1925638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:05:06.284556 1925638 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0116 03:05:06.284592 1925638 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0116 03:05:06.343904 1925638 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:05:06.344001 1925638 cache_images.go:92] LoadImages completed in 1.595296272s
	W0116 03:05:06.344099 1925638 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
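The cache_images flow logged above checks each required image by digest with crictl, deletes any mismatch, and then re-imports the arm64 tarballs from the local cache; the kube-apiserver tarball is missing on this host, so the load is skipped with the warning above and kubeadm later pulls the images instead. A rough manual equivalent, assuming an illustrative tarball path that is not taken from this run:

	# Drop the stale image, then re-import it from a cached tarball.
	sudo crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	sudo ctr --namespace=k8s.io images import /path/to/kube-proxy_v1.18.20.tar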
	I0116 03:05:06.344157 1925638 ssh_runner.go:195] Run: sudo crictl info
	I0116 03:05:06.385818 1925638 cni.go:84] Creating CNI manager for ""
	I0116 03:05:06.385843 1925638 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 03:05:06.385892 1925638 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:05:06.385918 1925638 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-846462 NodeName:ingress-addon-legacy-846462 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:05:06.386078 1925638 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-846462"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
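The rendered kubeadm.yaml above stacks four API objects separated by ---: InitConfiguration (node endpoint, CRI socket, bootstrap token), ClusterConfiguration (component extra args, etcd, networking), KubeletConfiguration, and KubeProxyConfiguration. One hedged way to sanity-check such a file without touching the node, which this test does not do:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run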
	I0116 03:05:06.386150 1925638 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-846462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-846462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:05:06.386223 1925638 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0116 03:05:06.397094 1925638 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:05:06.397188 1925638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:05:06.407727 1925638 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0116 03:05:06.429279 1925638 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0116 03:05:06.450857 1925638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I0116 03:05:06.472520 1925638 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 03:05:06.477096 1925638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
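The one-liner above rewrites /etc/hosts in place: it filters out any stale control-plane.minikube.internal entry, appends the fresh mapping, and writes the result back with cp rather than mv. Copying preserves the file's inode, which matters here because Docker bind-mounts /etc/hosts into the container and a rename would detach the mount. The same pattern, generalized with an illustrative host name and IP:

	{ grep -v $'\thost.example$' /etc/hosts; echo $'10.0.0.1\thost.example'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts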
	I0116 03:05:06.491275 1925638 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462 for IP: 192.168.49.2
	I0116 03:05:06.491307 1925638 certs.go:190] acquiring lock for shared ca certs: {Name:mk53d39e364f11aa45d491413f4acdef0406f659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:06.491460 1925638 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key
	I0116 03:05:06.491509 1925638 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key
	I0116 03:05:06.491564 1925638 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.key
	I0116 03:05:06.491579 1925638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt with IP's: []
	I0116 03:05:07.489544 1925638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt ...
	I0116 03:05:07.489579 1925638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: {Name:mkbb226dc696707c7a3f908ae6cd8d9293de547f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:07.489791 1925638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.key ...
	I0116 03:05:07.489820 1925638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.key: {Name:mk560191cb32031fad582dfa12cd52928d8bf4e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:07.489916 1925638 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.key.dd3b5fb2
	I0116 03:05:07.489936 1925638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 03:05:07.716260 1925638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.crt.dd3b5fb2 ...
	I0116 03:05:07.716294 1925638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.crt.dd3b5fb2: {Name:mk91fa8416e95bc9e3dfe75a20eee58cbd205cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:07.716477 1925638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.key.dd3b5fb2 ...
	I0116 03:05:07.716498 1925638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.key.dd3b5fb2: {Name:mk5f2af25a68fb781d0383e92cac4d7cead80c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:07.716589 1925638 certs.go:337] copying /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.crt
	I0116 03:05:07.716668 1925638 certs.go:341] copying /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.key
	I0116 03:05:07.716728 1925638 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.key
	I0116 03:05:07.716748 1925638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.crt with IP's: []
	I0116 03:05:08.079775 1925638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.crt ...
	I0116 03:05:08.079806 1925638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.crt: {Name:mk904156eb44d387dae703e77851253ce85c5675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:08.080006 1925638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.key ...
	I0116 03:05:08.080027 1925638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.key: {Name:mkca0bb67ff3f8736c57d13432c9e542cd3116aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:08.080109 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 03:05:08.080131 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 03:05:08.080147 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 03:05:08.080163 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 03:05:08.080175 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:05:08.080191 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:05:08.080205 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:05:08.080216 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:05:08.080274 1925638 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/1891165.pem (1338 bytes)
	W0116 03:05:08.080319 1925638 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/1891165_empty.pem, impossibly tiny 0 bytes
	I0116 03:05:08.080329 1925638 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:05:08.080368 1925638 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:05:08.080401 1925638 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:05:08.080428 1925638 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/certs/key.pem (1679 bytes)
	I0116 03:05:08.080480 1925638 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-1885793/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-1885793/.minikube/files/etc/ssl/certs/18911652.pem (1708 bytes)
	I0116 03:05:08.080519 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/files/etc/ssl/certs/18911652.pem -> /usr/share/ca-certificates/18911652.pem
	I0116 03:05:08.080541 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:05:08.080555 1925638 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/1891165.pem -> /usr/share/ca-certificates/1891165.pem
	I0116 03:05:08.081125 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:05:08.109326 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:05:08.137865 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:05:08.166401 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:05:08.194703 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:05:08.223094 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 03:05:08.250982 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:05:08.279342 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 03:05:08.307772 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/files/etc/ssl/certs/18911652.pem --> /usr/share/ca-certificates/18911652.pem (1708 bytes)
	I0116 03:05:08.337731 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:05:08.366141 1925638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-1885793/.minikube/certs/1891165.pem --> /usr/share/ca-certificates/1891165.pem (1338 bytes)
	I0116 03:05:08.394577 1925638 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
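At this point the profile's certificates are staged under /var/lib/minikube/certs: the kubectl client pair, an apiserver serving cert issued for the IPs generated above (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1), and the aggregator proxy-client pair. An illustrative spot check, not run by the test, that the serving cert carries those SANs:

	openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'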
	I0116 03:05:08.416057 1925638 ssh_runner.go:195] Run: openssl version
	I0116 03:05:08.423067 1925638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18911652.pem && ln -fs /usr/share/ca-certificates/18911652.pem /etc/ssl/certs/18911652.pem"
	I0116 03:05:08.434805 1925638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18911652.pem
	I0116 03:05:08.439857 1925638 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 03:01 /usr/share/ca-certificates/18911652.pem
	I0116 03:05:08.439969 1925638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18911652.pem
	I0116 03:05:08.448568 1925638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18911652.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:05:08.460246 1925638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:05:08.471802 1925638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:05:08.476922 1925638 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:55 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:05:08.476997 1925638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:05:08.485692 1925638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:05:08.497852 1925638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1891165.pem && ln -fs /usr/share/ca-certificates/1891165.pem /etc/ssl/certs/1891165.pem"
	I0116 03:05:08.509864 1925638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1891165.pem
	I0116 03:05:08.514699 1925638 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 03:01 /usr/share/ca-certificates/1891165.pem
	I0116 03:05:08.514769 1925638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1891165.pem
	I0116 03:05:08.523748 1925638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1891165.pem /etc/ssl/certs/51391683.0"
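The test -L / ln -fs pairs above implement OpenSSL's hashed-directory lookup: the trust store is searched by subject hash, so each CA PEM in /etc/ssl/certs needs a <hash>.0 symlink, with the hash computed by openssl x509 -hash. Recomputing one of the links created above:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"

For minikubeCA.pem that hash is b5213941, matching the b5213941.0 link in the log.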
	I0116 03:05:08.535373 1925638 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:05:08.539819 1925638 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:05:08.539872 1925638 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-846462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-846462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:05:08.539947 1925638 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0116 03:05:08.540005 1925638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:05:08.580170 1925638 cri.go:89] found id: ""
	I0116 03:05:08.580241 1925638 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:05:08.590901 1925638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:05:08.601825 1925638 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 03:05:08.601932 1925638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:05:08.613134 1925638 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:05:08.613195 1925638 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 03:05:08.670027 1925638 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0116 03:05:08.670360 1925638 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:05:08.728308 1925638 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 03:05:08.728407 1925638 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 03:05:08.728461 1925638 kubeadm.go:322] OS: Linux
	I0116 03:05:08.728512 1925638 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 03:05:08.728588 1925638 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 03:05:08.728680 1925638 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 03:05:08.728743 1925638 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 03:05:08.728792 1925638 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 03:05:08.728849 1925638 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 03:05:08.824911 1925638 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:05:08.825043 1925638 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:05:08.825196 1925638 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:05:09.060667 1925638 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:05:09.062093 1925638 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:05:09.062332 1925638 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:05:09.177882 1925638 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:05:09.182059 1925638 out.go:204]   - Generating certificates and keys ...
	I0116 03:05:09.182295 1925638 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:05:09.182411 1925638 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:05:09.386793 1925638 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 03:05:09.985256 1925638 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 03:05:10.755819 1925638 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 03:05:11.746131 1925638 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 03:05:12.341105 1925638 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 03:05:12.341322 1925638 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-846462 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 03:05:12.569036 1925638 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 03:05:12.569463 1925638 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-846462 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 03:05:13.472034 1925638 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 03:05:14.014162 1925638 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 03:05:14.304827 1925638 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 03:05:14.305059 1925638 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:05:14.746902 1925638 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:05:15.112157 1925638 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:05:16.021806 1925638 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:05:16.241125 1925638 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:05:16.241834 1925638 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:05:16.244680 1925638 out.go:204]   - Booting up control plane ...
	I0116 03:05:16.244770 1925638 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:05:16.261832 1925638 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:05:16.261911 1925638 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:05:16.261988 1925638 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:05:16.263064 1925638 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:05:27.765504 1925638 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502405 seconds
	I0116 03:05:27.765624 1925638 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:05:27.778754 1925638 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:05:28.301346 1925638 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:05:28.301554 1925638 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-846462 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:05:28.811761 1925638 kubeadm.go:322] [bootstrap-token] Using token: ghzt78.ofvkbg4xr6cest0z
	I0116 03:05:28.813986 1925638 out.go:204]   - Configuring RBAC rules ...
	I0116 03:05:28.814098 1925638 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:05:28.825542 1925638 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:05:28.848588 1925638 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:05:28.853185 1925638 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:05:28.857365 1925638 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:05:28.861802 1925638 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:05:28.874001 1925638 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:05:29.142219 1925638 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:05:29.246059 1925638 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:05:29.248780 1925638 kubeadm.go:322] 
	I0116 03:05:29.248856 1925638 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:05:29.248873 1925638 kubeadm.go:322] 
	I0116 03:05:29.248945 1925638 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:05:29.248951 1925638 kubeadm.go:322] 
	I0116 03:05:29.248974 1925638 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:05:29.249029 1925638 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:05:29.249077 1925638 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:05:29.249081 1925638 kubeadm.go:322] 
	I0116 03:05:29.249130 1925638 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:05:29.249199 1925638 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:05:29.249264 1925638 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:05:29.249268 1925638 kubeadm.go:322] 
	I0116 03:05:29.249346 1925638 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:05:29.249418 1925638 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:05:29.249422 1925638 kubeadm.go:322] 
	I0116 03:05:29.249522 1925638 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ghzt78.ofvkbg4xr6cest0z \
	I0116 03:05:29.249624 1925638 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6218d0988b2a7aa9cfeacd0df5d75f7b2af48c94d0234c3fb2bf032e099bbd3 \
	I0116 03:05:29.249646 1925638 kubeadm.go:322]     --control-plane 
	I0116 03:05:29.249650 1925638 kubeadm.go:322] 
	I0116 03:05:29.249736 1925638 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:05:29.249741 1925638 kubeadm.go:322] 
	I0116 03:05:29.249825 1925638 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ghzt78.ofvkbg4xr6cest0z \
	I0116 03:05:29.249924 1925638 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6218d0988b2a7aa9cfeacd0df5d75f7b2af48c94d0234c3fb2bf032e099bbd3 
	I0116 03:05:29.253765 1925638 kubeadm.go:322] W0116 03:05:08.669148    1093 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0116 03:05:29.253981 1925638 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 03:05:29.254079 1925638 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:05:29.254198 1925638 kubeadm.go:322] W0116 03:05:16.257638    1093 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 03:05:29.254315 1925638 kubeadm.go:322] W0116 03:05:16.259158    1093 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 03:05:29.254336 1925638 cni.go:84] Creating CNI manager for ""
	I0116 03:05:29.254344 1925638 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 03:05:29.256680 1925638 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 03:05:29.258883 1925638 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:05:29.263976 1925638 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0116 03:05:29.264000 1925638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:05:29.287444 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
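Minikube scp'd a kindnet manifest to /var/tmp/minikube/cni.yaml and applied it with the versioned kubectl above. A hedged follow-up check that the CNI daemonset rolled out, reusing the binary and kubeconfig paths from this run:

	sudo /var/lib/minikube/binaries/v1.18.20/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonsets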
	I0116 03:05:29.729161 1925638 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:05:29.729313 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:29.729400 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=ingress-addon-legacy-846462 minikube.k8s.io/updated_at=2024_01_16T03_05_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:29.887424 1925638 ops.go:34] apiserver oom_adj: -16
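The oom_adj: -16 value was read straight from /proc/<apiserver-pid>/oom_adj via the cat command above. On this legacy kernel interface -17 exempts a process from the OOM killer and +15 marks it the preferred victim, so a strongly negative score tells the kernel to spare the apiserver under memory pressure:

	cat /proc/$(pgrep kube-apiserver)/oom_adj   # prints -16 on this node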
	I0116 03:05:29.887487 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:30.387827 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:30.888236 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:31.387671 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:31.888569 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:32.388327 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:32.887795 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:33.388480 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:33.887724 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:34.387765 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:34.888263 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:35.388112 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:35.888507 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:36.387604 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:36.888313 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:37.388485 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:37.887829 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:38.388178 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:38.888322 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:39.388426 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:39.888237 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:40.387929 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:40.888523 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:41.387612 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:41.888209 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:42.388513 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:42.887958 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:43.388424 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:43.887603 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:44.388560 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:44.888611 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:45.388364 1925638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:05:45.504873 1925638 kubeadm.go:1088] duration metric: took 15.775618846s to wait for elevateKubeSystemPrivileges.
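The burst of kubectl get sa default calls above is a readiness gate: the default ServiceAccount only appears once the controller manager's token controller is up, and minikube polls for it roughly every 500ms before the minikube-rbac cluster-admin binding created earlier can take effect. A minimal sketch of the same gate, assuming the binary and kubeconfig paths from this run:

	until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done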
	I0116 03:05:45.504903 1925638 kubeadm.go:406] StartCluster complete in 36.965036379s
	I0116 03:05:45.504920 1925638 settings.go:142] acquiring lock: {Name:mk5ef3d7839aa1301dd151a46eaf62e1b5658d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:45.504982 1925638 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 03:05:45.505717 1925638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-1885793/kubeconfig: {Name:mk03027f3f7cf4dc9d608a622efae9ada84d58d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:05:45.506566 1925638 kapi.go:59] client config for ingress-addon-legacy-846462: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.key", CAFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:05:45.507642 1925638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:05:45.507959 1925638 config.go:182] Loaded profile config "ingress-addon-legacy-846462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0116 03:05:45.508010 1925638 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:05:45.508082 1925638 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-846462"
	I0116 03:05:45.508100 1925638 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-846462"
	I0116 03:05:45.508133 1925638 host.go:66] Checking if "ingress-addon-legacy-846462" exists ...
	I0116 03:05:45.508137 1925638 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 03:05:45.508169 1925638 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-846462"
	I0116 03:05:45.508181 1925638 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-846462"
	I0116 03:05:45.508461 1925638 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-846462 --format={{.State.Status}}
	I0116 03:05:45.508639 1925638 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-846462 --format={{.State.Status}}
	I0116 03:05:45.575958 1925638 kapi.go:59] client config for ingress-addon-legacy-846462: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.key", CAFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:05:45.576217 1925638 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-846462"
	I0116 03:05:45.576252 1925638 host.go:66] Checking if "ingress-addon-legacy-846462" exists ...
	I0116 03:05:45.576711 1925638 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-846462 --format={{.State.Status}}
	I0116 03:05:45.582709 1925638 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:05:45.588529 1925638 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:05:45.588551 1925638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:05:45.588631 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:05:45.646689 1925638 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:05:45.646711 1925638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:05:45.646798 1925638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-846462
	I0116 03:05:45.663978 1925638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35043 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa Username:docker}
	I0116 03:05:45.698690 1925638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35043 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/ingress-addon-legacy-846462/id_rsa Username:docker}
	I0116 03:05:45.758147 1925638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:05:45.975276 1925638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:05:45.979622 1925638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:05:46.010074 1925638 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-846462" context rescaled to 1 replicas
	I0116 03:05:46.010112 1925638 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 03:05:46.012278 1925638 out.go:177] * Verifying Kubernetes components...
	I0116 03:05:46.014729 1925638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:05:46.264599 1925638 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
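The sed pipeline at 03:05:45.758147 splices a hosts plugin block into the CoreDNS Corefile (plus a log directive after errors) and pushes the ConfigMap back with kubectl replace. Unescaped, the inserted stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

fallthrough hands any name other than host.minikube.internal back to the remaining plugins, so normal resolution via /etc/resolv.conf is unaffected.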
	I0116 03:05:46.558332 1925638 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0116 03:05:46.557096 1925638 kapi.go:59] client config for ingress-addon-legacy-846462: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.key", CAFile:"/home/jenkins/minikube-integration/17967-1885793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:05:46.560554 1925638 addons.go:505] enable addons completed in 1.052536684s: enabled=[default-storageclass storage-provisioner]
	I0116 03:05:46.558792 1925638 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-846462" to be "Ready" ...
	I0116 03:05:46.582528 1925638 node_ready.go:49] node "ingress-addon-legacy-846462" has status "Ready":"True"
	I0116 03:05:46.582551 1925638 node_ready.go:38] duration metric: took 21.964014ms waiting for node "ingress-addon-legacy-846462" to be "Ready" ...
	I0116 03:05:46.582563 1925638 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:05:46.599437 1925638 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace to be "Ready" ...
	I0116 03:05:48.605604 1925638 pod_ready.go:102] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:05:51.105361 1925638 pod_ready.go:102] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:05:53.106101 1925638 pod_ready.go:102] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:05:55.605824 1925638 pod_ready.go:102] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:05:58.105282 1925638 pod_ready.go:102] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:06:00.106102 1925638 pod_ready.go:102] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:06:02.604711 1925638 pod_ready.go:102] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:06:04.605313 1925638 pod_ready.go:102] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:06:06.105999 1925638 pod_ready.go:92] pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:06:06.106027 1925638 pod_ready.go:81] duration metric: took 19.506511261s waiting for pod "coredns-66bff467f8-f9qfl" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.106038 1925638 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-846462" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.111818 1925638 pod_ready.go:92] pod "etcd-ingress-addon-legacy-846462" in "kube-system" namespace has status "Ready":"True"
	I0116 03:06:06.111844 1925638 pod_ready.go:81] duration metric: took 5.798201ms waiting for pod "etcd-ingress-addon-legacy-846462" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.111859 1925638 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-846462" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.116698 1925638 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-846462" in "kube-system" namespace has status "Ready":"True"
	I0116 03:06:06.116726 1925638 pod_ready.go:81] duration metric: took 4.857666ms waiting for pod "kube-apiserver-ingress-addon-legacy-846462" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.116739 1925638 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-846462" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.121782 1925638 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-846462" in "kube-system" namespace has status "Ready":"True"
	I0116 03:06:06.121809 1925638 pod_ready.go:81] duration metric: took 5.062304ms waiting for pod "kube-controller-manager-ingress-addon-legacy-846462" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.121821 1925638 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xhgvj" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.126723 1925638 pod_ready.go:92] pod "kube-proxy-xhgvj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:06:06.126750 1925638 pod_ready.go:81] duration metric: took 4.921771ms waiting for pod "kube-proxy-xhgvj" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.126766 1925638 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-846462" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.301125 1925638 request.go:629] Waited for 174.295071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-846462
	I0116 03:06:06.501603 1925638 request.go:629] Waited for 197.830198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-846462
	I0116 03:06:06.504443 1925638 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-846462" in "kube-system" namespace has status "Ready":"True"
	I0116 03:06:06.504470 1925638 pod_ready.go:81] duration metric: took 377.69612ms waiting for pod "kube-scheduler-ingress-addon-legacy-846462" in "kube-system" namespace to be "Ready" ...
	I0116 03:06:06.504483 1925638 pod_ready.go:38] duration metric: took 19.921910079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:06:06.504521 1925638 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:06:06.504599 1925638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:06:06.518452 1925638 api_server.go:72] duration metric: took 20.508303818s to wait for apiserver process to appear ...
	I0116 03:06:06.518477 1925638 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:06:06.518498 1925638 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 03:06:06.527272 1925638 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 03:06:06.528092 1925638 api_server.go:141] control plane version: v1.18.20
	I0116 03:06:06.528116 1925638 api_server.go:131] duration metric: took 9.632175ms to wait for apiserver health ...
	I0116 03:06:06.528125 1925638 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:06:06.701473 1925638 request.go:629] Waited for 173.264569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:06:06.707353 1925638 system_pods.go:59] 8 kube-system pods found
	I0116 03:06:06.707389 1925638 system_pods.go:61] "coredns-66bff467f8-f9qfl" [6c4a9e9a-5d22-424f-bfa2-88c8a56e151e] Running
	I0116 03:06:06.707395 1925638 system_pods.go:61] "etcd-ingress-addon-legacy-846462" [cc758dbd-d266-4950-a30a-178e9bc38095] Running
	I0116 03:06:06.707433 1925638 system_pods.go:61] "kindnet-skvhg" [bde7c977-6a14-4e04-8872-b1ea813d0f1e] Running
	I0116 03:06:06.707450 1925638 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-846462" [7c366f0e-1d7d-4231-a27f-464420ce6a13] Running
	I0116 03:06:06.707455 1925638 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-846462" [896b6c5c-7aad-4e86-9f53-9466d1839352] Running
	I0116 03:06:06.707461 1925638 system_pods.go:61] "kube-proxy-xhgvj" [c774b455-1e24-4b25-97c3-bb04e123a400] Running
	I0116 03:06:06.707468 1925638 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-846462" [e96b326c-322f-41e7-90bd-d0fc41290c6e] Running
	I0116 03:06:06.707474 1925638 system_pods.go:61] "storage-provisioner" [362cd79c-547f-4855-acdb-313c14f1b2d2] Running
	I0116 03:06:06.707483 1925638 system_pods.go:74] duration metric: took 179.352419ms to wait for pod list to return data ...
	I0116 03:06:06.707498 1925638 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:06:06.900826 1925638 request.go:629] Waited for 193.240628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 03:06:06.903239 1925638 default_sa.go:45] found service account: "default"
	I0116 03:06:06.903265 1925638 default_sa.go:55] duration metric: took 195.75852ms for default service account to be created ...
	I0116 03:06:06.903275 1925638 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:06:07.101691 1925638 request.go:629] Waited for 198.339362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:06:07.107574 1925638 system_pods.go:86] 8 kube-system pods found
	I0116 03:06:07.107604 1925638 system_pods.go:89] "coredns-66bff467f8-f9qfl" [6c4a9e9a-5d22-424f-bfa2-88c8a56e151e] Running
	I0116 03:06:07.107616 1925638 system_pods.go:89] "etcd-ingress-addon-legacy-846462" [cc758dbd-d266-4950-a30a-178e9bc38095] Running
	I0116 03:06:07.107622 1925638 system_pods.go:89] "kindnet-skvhg" [bde7c977-6a14-4e04-8872-b1ea813d0f1e] Running
	I0116 03:06:07.107662 1925638 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-846462" [7c366f0e-1d7d-4231-a27f-464420ce6a13] Running
	I0116 03:06:07.107674 1925638 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-846462" [896b6c5c-7aad-4e86-9f53-9466d1839352] Running
	I0116 03:06:07.107679 1925638 system_pods.go:89] "kube-proxy-xhgvj" [c774b455-1e24-4b25-97c3-bb04e123a400] Running
	I0116 03:06:07.107684 1925638 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-846462" [e96b326c-322f-41e7-90bd-d0fc41290c6e] Running
	I0116 03:06:07.107689 1925638 system_pods.go:89] "storage-provisioner" [362cd79c-547f-4855-acdb-313c14f1b2d2] Running
	I0116 03:06:07.107700 1925638 system_pods.go:126] duration metric: took 204.419475ms to wait for k8s-apps to be running ...
	I0116 03:06:07.107716 1925638 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:06:07.107775 1925638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:06:07.121855 1925638 system_svc.go:56] duration metric: took 14.133978ms WaitForService to wait for kubelet.
	I0116 03:06:07.121884 1925638 kubeadm.go:581] duration metric: took 21.111742748s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:06:07.121903 1925638 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:06:07.301292 1925638 request.go:629] Waited for 179.293319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0116 03:06:07.304271 1925638 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 03:06:07.304304 1925638 node_conditions.go:123] node cpu capacity is 2
	I0116 03:06:07.304317 1925638 node_conditions.go:105] duration metric: took 182.408774ms to run NodePressure ...
	I0116 03:06:07.304362 1925638 start.go:228] waiting for startup goroutines ...
	I0116 03:06:07.304370 1925638 start.go:233] waiting for cluster config update ...
	I0116 03:06:07.304384 1925638 start.go:242] writing updated cluster config ...
	I0116 03:06:07.304676 1925638 ssh_runner.go:195] Run: rm -f paused
	I0116 03:06:07.367521 1925638 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0116 03:06:07.370085 1925638 out.go:177] 
	W0116 03:06:07.372494 1925638 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0116 03:06:07.374540 1925638 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0116 03:06:07.376419 1925638 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-846462" cluster and "default" namespace by default
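	Note: the version-skew warning above is expected for this run. kubectl is supported within one minor version of the cluster's apiserver, and 1.29.0 against a 1.18.20 control plane is 11 minor versions apart. A matching client can be run through minikube itself, as the hint suggests; a sketch (profile name taken from this run, flag placement assumed):
	
	  minikube -p ingress-addon-legacy-846462 kubectl -- get pods -A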
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e37fc014c3a2       dd1b12fcb6097       15 seconds ago       Exited              hello-world-app           2                   80c7e7e85da67       hello-world-app-5f5d8b66bb-gv7kr
	ab46268f7e0f0       74077e780ec71       38 seconds ago       Running             nginx                     0                   f36baf5c2c9da       nginx
	714cedbd1e8da       d7f0cba3aa5bf       56 seconds ago       Exited              controller                0                   66d5b87b16d3c       ingress-nginx-controller-7fcf777cb7-l5vxq
	4fe858aa120e9       a883f7fc35610       About a minute ago   Exited              patch                     0                   7d28f94ad52b0       ingress-nginx-admission-patch-t467g
	75260c6aa3db0       a883f7fc35610       About a minute ago   Exited              create                    0                   793e767113c73       ingress-nginx-admission-create-9nbj2
	49abf7da36a1c       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   951da080eacef       coredns-66bff467f8-f9qfl
	95be1151d8ac9       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   5ab50bba0340f       storage-provisioner
	218965e0989cb       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   bc6ed0dec10a5       kindnet-skvhg
	547c7028bddf9       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   0bd875e7830ef       kube-proxy-xhgvj
	05104473460b1       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   7779da2d746d2       kube-controller-manager-ingress-addon-legacy-846462
	579addb39af6d       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   924c613fd2637       etcd-ingress-addon-legacy-846462
	b8641b81c05ce       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   001c399b168c4       kube-apiserver-ingress-addon-legacy-846462
	5ecdbfbc0f607       095f37015706d       About a minute ago   Running             kube-scheduler            0                   43564358c7d86       kube-scheduler-ingress-addon-legacy-846462
	
	
	==> containerd <==
	Jan 16 03:06:56 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:06:56.864512068Z" level=info msg="RemoveContainer for \"42dd2e6d7e943009b03d51c726fe54e7b2aca5c461090cc052365a5a8043ee2a\""
	Jan 16 03:06:56 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:06:56.869896635Z" level=info msg="RemoveContainer for \"42dd2e6d7e943009b03d51c726fe54e7b2aca5c461090cc052365a5a8043ee2a\" returns successfully"
	Jan 16 03:07:03 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:03.540679135Z" level=info msg="StopContainer for \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\" with timeout 2 (s)"
	Jan 16 03:07:03 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:03.541203773Z" level=info msg="Stop container \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\" with signal terminated"
	Jan 16 03:07:03 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:03.555042276Z" level=info msg="StopContainer for \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\" with timeout 2 (s)"
	Jan 16 03:07:03 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:03.569681077Z" level=info msg="Skipping the sending of signal terminated to container \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\" because a prior stop with timeout>0 request already sent the signal"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.570827997Z" level=info msg="Kill container \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\""
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.570842709Z" level=info msg="Kill container \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\""
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.655852660Z" level=info msg="shim disconnected" id=714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.655915338Z" level=warning msg="cleaning up after shim disconnected" id=714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec namespace=k8s.io
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.655929516Z" level=info msg="cleaning up dead shim"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.666045279Z" level=warning msg="cleanup warnings time=\"2024-01-16T03:07:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4585 runtime=io.containerd.runc.v2\n"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.668653171Z" level=info msg="StopContainer for \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\" returns successfully"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.668653286Z" level=info msg="StopContainer for \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\" returns successfully"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.669243710Z" level=info msg="StopPodSandbox for \"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\""
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.669316020Z" level=info msg="Container to stop \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.669624499Z" level=info msg="StopPodSandbox for \"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\""
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.669673302Z" level=info msg="Container to stop \"714cedbd1e8daa3095fcc63e907c760d8bac7d1f06b679ccaf7ef37753db81ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.707049149Z" level=info msg="shim disconnected" id=66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.707115535Z" level=warning msg="cleaning up after shim disconnected" id=66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5 namespace=k8s.io
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.707126128Z" level=info msg="cleaning up dead shim"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.718352800Z" level=warning msg="cleanup warnings time=\"2024-01-16T03:07:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4622 runtime=io.containerd.runc.v2\n"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.760857461Z" level=error msg="StopPodSandbox for \"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\" failed" error="failed to destroy network for sandbox \"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-9c81b3e0f6dde96988846 --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.794411198Z" level=info msg="TearDown network for sandbox \"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\" successfully"
	Jan 16 03:07:05 ingress-addon-legacy-846462 containerd[831]: time="2024-01-16T03:07:05.794463086Z" level=info msg="StopPodSandbox for \"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\" returns successfully"
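	Note: the StopPodSandbox error above is the portmap CNI plugin failing to delete a DNAT chain that no longer exists ("No chain/target/match by that name"); the immediately following TearDown retry succeeds, so the failure is transient. A hypothetical manual check for leftover port-mapping chains on the node (chain prefix taken from the log) would look like:
	
	  sudo iptables -t nat -S | grep CNI-DN- || echo "no CNI DNAT chains remain"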
	
	
	==> coredns [49abf7da36a1c14519c427d98bd1a4cc4b687ff0ab1d746a930613e7bd3d9205] <==
	[INFO] 10.244.0.5:59861 - 41380 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001926098s
	[INFO] 10.244.0.5:48812 - 4771 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000195113s
	[INFO] 10.244.0.5:48812 - 49321 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066115s
	[INFO] 10.244.0.5:59861 - 18186 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000200158s
	[INFO] 10.244.0.5:59954 - 10733 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068888s
	[INFO] 10.244.0.5:59954 - 55012 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00007313s
	[INFO] 10.244.0.5:59954 - 39107 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039376s
	[INFO] 10.244.0.5:48812 - 60355 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001034957s
	[INFO] 10.244.0.5:59954 - 47154 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003698s
	[INFO] 10.244.0.5:59954 - 38628 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051125s
	[INFO] 10.244.0.5:59954 - 45707 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041197s
	[INFO] 10.244.0.5:48812 - 12879 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000896073s
	[INFO] 10.244.0.5:48812 - 54580 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006696s
	[INFO] 10.244.0.5:59954 - 26946 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000961113s
	[INFO] 10.244.0.5:59954 - 47623 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001008447s
	[INFO] 10.244.0.5:59954 - 20791 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000129309s
	[INFO] 10.244.0.5:53667 - 43499 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000161776s
	[INFO] 10.244.0.5:53667 - 52410 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000034009s
	[INFO] 10.244.0.5:53667 - 4154 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033435s
	[INFO] 10.244.0.5:53667 - 32120 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032056s
	[INFO] 10.244.0.5:53667 - 19492 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031326s
	[INFO] 10.244.0.5:53667 - 15208 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036701s
	[INFO] 10.244.0.5:53667 - 60276 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000855278s
	[INFO] 10.244.0.5:53667 - 13236 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000882552s
	[INFO] 10.244.0.5:53667 - 31910 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038465s
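	Note: the NXDOMAIN fan-out above is ordinary resolver search-list expansion, not a failure. The queried name has fewer dots than the resolver's ndots threshold, so each lookup of hello-world-app.default.svc.cluster.local is first tried with every search domain appended (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the absolute name resolves with NOERROR. A pod resolv.conf consistent with these queries, as a sketch (nameserver address assumed to be the conventional kube-dns ClusterIP):
	
	  search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  nameserver 10.96.0.10
	  options ndots:5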
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-846462
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-846462
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=ingress-addon-legacy-846462
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_05_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:05:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-846462
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:07:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:07:02 +0000   Tue, 16 Jan 2024 03:05:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:07:02 +0000   Tue, 16 Jan 2024 03:05:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:07:02 +0000   Tue, 16 Jan 2024 03:05:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:07:02 +0000   Tue, 16 Jan 2024 03:05:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-846462
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 2404ad38c21d4e6a96e71213f41001b3
	  System UUID:                f8f5d640-2891-4467-b454-9c355a81be1f
	  Boot ID:                    db337b58-1f53-411c-9ff2-b8ff3dd0911c
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-gv7kr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 coredns-66bff467f8-f9qfl                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     86s
	  kube-system                 etcd-ingress-addon-legacy-846462                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-skvhg                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-ingress-addon-legacy-846462             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-846462    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-xhgvj                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-ingress-addon-legacy-846462             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  113s (x4 over 113s)  kubelet     Node ingress-addon-legacy-846462 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x4 over 113s)  kubelet     Node ingress-addon-legacy-846462 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 113s)  kubelet     Node ingress-addon-legacy-846462 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-846462 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-846462 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-846462 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-846462 status is now: NodeReady
	  Normal  Starting                 85s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001128] FS-Cache: O-key=[8] '43dcc90000000000'
	[  +0.000818] FS-Cache: N-cookie c=0000008a [p=00000081 fl=2 nc=0 na=1]
	[  +0.001043] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=0000000050f0f017
	[  +0.001132] FS-Cache: N-key=[8] '43dcc90000000000'
	[  +0.015464] FS-Cache: Duplicate cookie detected
	[  +0.000793] FS-Cache: O-cookie c=00000084 [p=00000081 fl=226 nc=0 na=1]
	[  +0.001074] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=000000001a96c964
	[  +0.001195] FS-Cache: O-key=[8] '43dcc90000000000'
	[  +0.000785] FS-Cache: N-cookie c=0000008b [p=00000081 fl=2 nc=0 na=1]
	[  +0.001061] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=00000000c3f1ef24
	[  +0.001122] FS-Cache: N-key=[8] '43dcc90000000000'
	[  +3.363678] FS-Cache: Duplicate cookie detected
	[  +0.000885] FS-Cache: O-cookie c=00000082 [p=00000081 fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=000000008d08245a
	[  +0.001139] FS-Cache: O-key=[8] '42dcc90000000000'
	[  +0.000773] FS-Cache: N-cookie c=0000008d [p=00000081 fl=2 nc=0 na=1]
	[  +0.001050] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000001c5f707d
	[  +0.001145] FS-Cache: N-key=[8] '42dcc90000000000'
	[  +0.416467] FS-Cache: Duplicate cookie detected
	[  +0.000781] FS-Cache: O-cookie c=00000087 [p=00000081 fl=226 nc=0 na=1]
	[  +0.001036] FS-Cache: O-cookie d=00000000e15ff1bd{9p.inode} n=00000000d340b884
	[  +0.001246] FS-Cache: O-key=[8] '48dcc90000000000'
	[  +0.000732] FS-Cache: N-cookie c=0000008e [p=00000081 fl=2 nc=0 na=1]
	[  +0.001016] FS-Cache: N-cookie d=00000000e15ff1bd{9p.inode} n=000000008ae39b39
	[  +0.001067] FS-Cache: N-key=[8] '48dcc90000000000'
	
	
	==> etcd [579addb39af6dd5a1a2112c67a3387172c9f1021cb97e61324839a8fb7df105d] <==
	raft2024/01/16 03:05:21 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/16 03:05:21 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/16 03:05:21 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/16 03:05:21 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-16 03:05:21.671862 W | auth: simple token is not cryptographically signed
	2024-01-16 03:05:21.675679 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-16 03:05:21.680497 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 03:05:21.680849 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 03:05:21.681164 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-16 03:05:21.681410 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/16 03:05:21 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-16 03:05:21.682058 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2024/01/16 03:05:22 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/16 03:05:22 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/16 03:05:22 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/16 03:05:22 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/16 03:05:22 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-16 03:05:22.338481 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-16 03:05:22.339153 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-16 03:05:22.339331 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-16 03:05:22.339432 I | etcdserver: published {Name:ingress-addon-legacy-846462 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-16 03:05:22.339673 I | embed: ready to serve client requests
	2024-01-16 03:05:22.341570 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-16 03:05:22.341785 I | embed: ready to serve client requests
	2024-01-16 03:05:22.343134 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 03:07:11 up  9:49,  0 users,  load average: 0.71, 1.47, 1.87
	Linux ingress-addon-legacy-846462 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [218965e0989cb992781f27f0f3d58675cb264570865d054aed884f9ffa40e7b4] <==
	I0116 03:05:48.219072       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0116 03:05:48.219144       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0116 03:05:48.219266       1 main.go:116] setting mtu 1500 for CNI 
	I0116 03:05:48.219276       1 main.go:146] kindnetd IP family: "ipv4"
	I0116 03:05:48.219286       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0116 03:05:48.619760       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:05:48.619795       1 main.go:227] handling current node
	I0116 03:05:58.632731       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:05:58.632771       1 main.go:227] handling current node
	I0116 03:06:08.644769       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:06:08.644797       1 main.go:227] handling current node
	I0116 03:06:18.649201       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:06:18.649234       1 main.go:227] handling current node
	I0116 03:06:28.656656       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:06:28.656686       1 main.go:227] handling current node
	I0116 03:06:38.661875       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:06:38.661901       1 main.go:227] handling current node
	I0116 03:06:48.666869       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:06:48.666993       1 main.go:227] handling current node
	I0116 03:06:58.680030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:06:58.680061       1 main.go:227] handling current node
	I0116 03:07:08.689913       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:07:08.689941       1 main.go:227] handling current node
	
	
	==> kube-apiserver [b8641b81c05ce1f2f5e0cf08ec13e62d5e7f2262ec39d512dabebfe91eabab2a] <==
	I0116 03:05:26.136616       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0116 03:05:26.173010       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0116 03:05:26.302331       1 cache.go:39] Caches are synced for autoregister controller
	I0116 03:05:26.304398       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 03:05:26.304965       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0116 03:05:26.308844       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 03:05:26.342498       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0116 03:05:27.096744       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0116 03:05:27.096774       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0116 03:05:27.105532       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0116 03:05:27.109591       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0116 03:05:27.109613       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0116 03:05:27.494240       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 03:05:27.541409       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0116 03:05:27.653234       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0116 03:05:27.654310       1 controller.go:609] quota admission added evaluator for: endpoints
	I0116 03:05:27.658052       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 03:05:28.494669       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0116 03:05:29.129002       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0116 03:05:29.223368       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0116 03:05:32.508693       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 03:05:45.586396       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0116 03:05:45.808611       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0116 03:06:08.287093       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0116 03:06:30.386415       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [05104473460b1fa9800f6cd5b227777bfdf2a4082a32dfa966bedd841bd237dc] <==
	I0116 03:05:45.847168       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0116 03:05:45.847211       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-846462. Assuming now as a timestamp.
	I0116 03:05:45.847253       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0116 03:05:45.847513       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0116 03:05:45.847859       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-846462", UID:"a640fa93-7adb-4894-b311-64dcfea92eca", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-846462 event: Registered Node ingress-addon-legacy-846462 in Controller
	I0116 03:05:45.854989       1 shared_informer.go:230] Caches are synced for TTL 
	I0116 03:05:45.859810       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 03:05:45.893526       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0116 03:05:45.894058       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0116 03:05:45.894249       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 03:05:45.933071       1 shared_informer.go:230] Caches are synced for stateful set 
	I0116 03:05:45.942222       1 shared_informer.go:230] Caches are synced for disruption 
	I0116 03:05:45.942250       1 disruption.go:339] Sending events to api server.
	I0116 03:05:45.943329       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 03:05:45.953118       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 03:05:45.953152       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0116 03:06:08.270195       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"1072a196-ed5c-492b-976e-2acb89c4a943", APIVersion:"apps/v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0116 03:06:08.291291       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"2b29b067-1484-4c3c-975b-eaf9bc9a78c0", APIVersion:"apps/v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-l5vxq
	I0116 03:06:08.337177       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"2a89dc28-5f52-43d0-b50a-9fe725bb7330", APIVersion:"batch/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-9nbj2
	I0116 03:06:08.399638       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"53529e8a-0893-4c5f-ab64-f54d81e4f925", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-t467g
	I0116 03:06:10.734414       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"53529e8a-0893-4c5f-ab64-f54d81e4f925", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 03:06:10.760660       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"2a89dc28-5f52-43d0-b50a-9fe725bb7330", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 03:06:39.161742       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"665c3261-9fef-4600-a945-cb6a19f86e40", APIVersion:"apps/v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0116 03:06:39.168821       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"7107df92-c848-480a-ab98-688730710ad9", APIVersion:"apps/v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-gv7kr
	E0116 03:07:08.204166       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-k7dkt" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [547c7028bddf9497424e1130387f6e87fb514f2051aaa0a94771fa5c45d2a60e] <==
	W0116 03:05:46.628557       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0116 03:05:46.641629       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0116 03:05:46.641832       1 server_others.go:186] Using iptables Proxier.
	I0116 03:05:46.642821       1 server.go:583] Version: v1.18.20
	I0116 03:05:46.645068       1 config.go:315] Starting service config controller
	I0116 03:05:46.645224       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0116 03:05:46.645547       1 config.go:133] Starting endpoints config controller
	I0116 03:05:46.645670       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0116 03:05:46.745549       1 shared_informer.go:230] Caches are synced for service config 
	I0116 03:05:46.746142       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [5ecdbfbc0f6078853e5770a54e54af2220bb2efec0d0f9f400744975b983eac9] <==
	W0116 03:05:26.238715       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:05:26.315260       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 03:05:26.315451       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 03:05:26.317760       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0116 03:05:26.320274       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0116 03:05:26.322473       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:05:26.322629       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0116 03:05:26.323213       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:05:26.323300       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:05:26.323382       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:05:26.323460       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:05:26.323534       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:05:26.323609       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:05:26.323677       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:05:26.323746       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:05:26.323865       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:05:26.324636       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:05:26.324725       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:05:26.324877       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:05:27.200067       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:05:27.230559       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:05:27.247656       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:05:27.334662       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:05:27.343227       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0116 03:05:27.622881       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
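	Note: the "Failed to list" errors during startup are a known bootstrap race: the scheduler begins its informers before the system:kube-scheduler RBAC bindings exist, and the errors stop once authorization catches up and the caches sync (last line above).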
	
	
	==> kubelet <==
	Jan 16 03:06:55 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:06:55.279389    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-69hfr" (UniqueName: "kubernetes.io/secret/794d8e8c-aedf-401e-9883-a291d91ce665-minikube-ingress-dns-token-69hfr") pod "794d8e8c-aedf-401e-9883-a291d91ce665" (UID: "794d8e8c-aedf-401e-9883-a291d91ce665")
	Jan 16 03:06:55 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:06:55.285948    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794d8e8c-aedf-401e-9883-a291d91ce665-minikube-ingress-dns-token-69hfr" (OuterVolumeSpecName: "minikube-ingress-dns-token-69hfr") pod "794d8e8c-aedf-401e-9883-a291d91ce665" (UID: "794d8e8c-aedf-401e-9883-a291d91ce665"). InnerVolumeSpecName "minikube-ingress-dns-token-69hfr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 03:06:55 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:06:55.379759    1630 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-69hfr" (UniqueName: "kubernetes.io/secret/794d8e8c-aedf-401e-9883-a291d91ce665-minikube-ingress-dns-token-69hfr") on node "ingress-addon-legacy-846462" DevicePath ""
	Jan 16 03:06:55 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:06:55.615039    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3aff568df50aa82e253d2e9019d9546b730a36c8e65978f53a2079c08370c994
	Jan 16 03:06:55 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:06:55.859736    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3aff568df50aa82e253d2e9019d9546b730a36c8e65978f53a2079c08370c994
	Jan 16 03:06:55 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:06:55.860106    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7e37fc014c3a27b11a1df8306b18f351b24c761461a503013332b622111351ca
	Jan 16 03:06:55 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:06:55.860394    1630 pod_workers.go:191] Error syncing pod f4f27b11-f847-49c8-a558-3e870fe84abf ("hello-world-app-5f5d8b66bb-gv7kr_default(f4f27b11-f847-49c8-a558-3e870fe84abf)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-gv7kr_default(f4f27b11-f847-49c8-a558-3e870fe84abf)"
	Jan 16 03:06:56 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:06:56.863171    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 42dd2e6d7e943009b03d51c726fe54e7b2aca5c461090cc052365a5a8043ee2a
	Jan 16 03:07:03 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:07:03.551519    1630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-l5vxq.17aab500b413327c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-l5vxq", UID:"99f06d5c-08fa-4954-8d39-d81762b9c838", APIVersion:"v1", ResourceVersion:"463", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-846462"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1619a95e01f8c7c, ext:94471471954, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1619a95e01f8c7c, ext:94471471954, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-l5vxq.17aab500b413327c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 03:07:03 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:07:03.571152    1630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-l5vxq.17aab500b413327c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-l5vxq", UID:"99f06d5c-08fa-4954-8d39-d81762b9c838", APIVersion:"v1", ResourceVersion:"463", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-846462"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1619a95e01f8c7c, ext:94471471954, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1619a95e0e4a3f6, ext:94484388556, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-l5vxq.17aab500b413327c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 03:07:05 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:07:05.761289    1630 remote_runtime.go:128] StopPodSandbox "66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5": plugin type="portmap" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-9c81b3e0f6dde96988846 --wait]: exit status 1: iptables: No chain/target/match by that name.
	Jan 16 03:07:05 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:07:05.761362    1630 kuberuntime_manager.go:912] Failed to stop sandbox {"containerd" "66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5"}
	Jan 16 03:07:05 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:07:05.762433    1630 kubelet.go:1598] error killing pod: failed to "KillPodSandbox" for "99f06d5c-08fa-4954-8d39-d81762b9c838" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-9c81b3e0f6dde96988846 --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Jan 16 03:07:05 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:07:05.762478    1630 pod_workers.go:191] Error syncing pod 99f06d5c-08fa-4954-8d39-d81762b9c838 ("ingress-nginx-controller-7fcf777cb7-l5vxq_ingress-nginx(99f06d5c-08fa-4954-8d39-d81762b9c838)"), skipping: error killing pod: failed to "KillPodSandbox" for "99f06d5c-08fa-4954-8d39-d81762b9c838" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-9c81b3e0f6dde96988846 --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Jan 16 03:07:05 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:07:05.770683    1630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-l5vxq.17aab501389a9d6c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-l5vxq", UID:"99f06d5c-08fa-4954-8d39-d81762b9c838", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}, Reason:"FailedKillPod", Message:"error killing pod: failed to \"KillPodSandbox\" for \"99f06d5c-08fa-4954-8d39-d81762b9c838\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5\\\": plugin type=\\\"portmap\\\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-9c81b3e0f6dde96988846 --wait]: exit status 1: iptables: No chain/target/match by that name.\\n\"", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-846462"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1619a966d71636c, ext:96694939210, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1619a966d71636c, ext:96694939210, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-l5vxq.17aab501389a9d6c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 03:07:05 ingress-addon-legacy-846462 kubelet[1630]: W0116 03:07:05.884555    1630 pod_container_deletor.go:77] Container "66d5b87b16d3cfa4bd5cd614452794ecd29f351f1610b4a277c0d93c10dec5e5" not found in pod's containers
	Jan 16 03:07:07 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:07:07.616179    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/99f06d5c-08fa-4954-8d39-d81762b9c838-webhook-cert") pod "99f06d5c-08fa-4954-8d39-d81762b9c838" (UID: "99f06d5c-08fa-4954-8d39-d81762b9c838")
	Jan 16 03:07:07 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:07:07.616245    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-d95bw" (UniqueName: "kubernetes.io/secret/99f06d5c-08fa-4954-8d39-d81762b9c838-ingress-nginx-token-d95bw") pod "99f06d5c-08fa-4954-8d39-d81762b9c838" (UID: "99f06d5c-08fa-4954-8d39-d81762b9c838")
	Jan 16 03:07:07 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:07:07.622390    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f06d5c-08fa-4954-8d39-d81762b9c838-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "99f06d5c-08fa-4954-8d39-d81762b9c838" (UID: "99f06d5c-08fa-4954-8d39-d81762b9c838"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 03:07:07 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:07:07.622921    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f06d5c-08fa-4954-8d39-d81762b9c838-ingress-nginx-token-d95bw" (OuterVolumeSpecName: "ingress-nginx-token-d95bw") pod "99f06d5c-08fa-4954-8d39-d81762b9c838" (UID: "99f06d5c-08fa-4954-8d39-d81762b9c838"). InnerVolumeSpecName "ingress-nginx-token-d95bw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 03:07:07 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:07:07.716525    1630 reconciler.go:319] Volume detached for volume "ingress-nginx-token-d95bw" (UniqueName: "kubernetes.io/secret/99f06d5c-08fa-4954-8d39-d81762b9c838-ingress-nginx-token-d95bw") on node "ingress-addon-legacy-846462" DevicePath ""
	Jan 16 03:07:07 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:07:07.716730    1630 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/99f06d5c-08fa-4954-8d39-d81762b9c838-webhook-cert") on node "ingress-addon-legacy-846462" DevicePath ""
	Jan 16 03:07:08 ingress-addon-legacy-846462 kubelet[1630]: W0116 03:07:08.620275    1630 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/99f06d5c-08fa-4954-8d39-d81762b9c838/volumes" does not exist
	Jan 16 03:07:11 ingress-addon-legacy-846462 kubelet[1630]: I0116 03:07:11.614780    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7e37fc014c3a27b11a1df8306b18f351b24c761461a503013332b622111351ca
	Jan 16 03:07:11 ingress-addon-legacy-846462 kubelet[1630]: E0116 03:07:11.615062    1630 pod_workers.go:191] Error syncing pod f4f27b11-f847-49c8-a558-3e870fe84abf ("hello-world-app-5f5d8b66bb-gv7kr_default(f4f27b11-f847-49c8-a558-3e870fe84abf)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-gv7kr_default(f4f27b11-f847-49c8-a558-3e870fe84abf)"
	
	
	==> storage-provisioner [95be1151d8ac9cdfea3982e9b0064e1de4eb4e9eb785f46a229bb5767b59fae2] <==
	I0116 03:05:49.253405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:05:49.266543       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:05:49.266710       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:05:49.279939       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:05:49.280398       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"177b9051-b875-4979-9753-6bb63bf711f5", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-846462_44613fa5-9fbe-4c61-a749-254d4670c2a3 became leader
	I0116 03:05:49.281623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-846462_44613fa5-9fbe-4c61-a749-254d4670c2a3!
	I0116 03:05:49.381823       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-846462_44613fa5-9fbe-4c61-a749-254d4670c2a3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-846462 -n ingress-addon-legacy-846462
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-846462 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (55.96s)
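The kubelet errors in the post-mortem above show two failure modes compounding: the CNI portmap plugin fails to delete an iptables DNAT chain that is already gone ("iptables: No chain/target/match by that name"), and the kubelet's status events are rejected because the ingress-nginx namespace is already terminating. A minimal manual-diagnosis sketch, assuming the docker-driver node container from these logs is still running (with the docker driver the container name matches the profile name; the chain name is copied verbatim from the kubelet error):

# Check whether the chain the portmap plugin tried to delete still exists on the node.
docker exec ingress-addon-legacy-846462 iptables -t nat -S | grep CNI-DN-9c81b3e0f6dde96988846 \
  || echo "chain already gone: the teardown failure was a harmless double-delete"

# Re-run only the failing subtest (a hypothetical invocation; adjust the package
# path and flags to however this integration suite is normally driven):
go test -v -run 'TestIngressAddonLegacy/serial/ValidateIngressAddons' ./test/integration/...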

                                                
                                    

Test pass (281/320)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.76
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
9 TestDownloadOnly/v1.16.0/DeleteAll 13.39
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.28.4/json-events 19
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 13.38
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.17
21 TestDownloadOnly/v1.29.0-rc.2/json-events 14.52
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 13.39
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
36 TestAddons/Setup 124.25
38 TestAddons/parallel/Registry 15.74
40 TestAddons/parallel/InspektorGadget 10.94
41 TestAddons/parallel/MetricsServer 6.82
44 TestAddons/parallel/CSI 89.95
45 TestAddons/parallel/Headlamp 11.53
47 TestAddons/parallel/LocalPath 51.55
48 TestAddons/parallel/NvidiaDevicePlugin 5.64
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.19
53 TestAddons/StoppedEnableDisable 12.36
54 TestCertOptions 35.32
55 TestCertExpiration 226.26
57 TestForceSystemdFlag 39.33
58 TestForceSystemdEnv 52.65
59 TestDockerEnvContainerd 47.36
64 TestErrorSpam/setup 33.65
65 TestErrorSpam/start 0.87
66 TestErrorSpam/status 1.11
67 TestErrorSpam/pause 1.85
68 TestErrorSpam/unpause 1.94
69 TestErrorSpam/stop 1.51
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 61.3
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.93
76 TestFunctional/serial/KubeContext 0.08
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.07
81 TestFunctional/serial/CacheCmd/cache/add_local 1.47
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.24
86 TestFunctional/serial/CacheCmd/cache/delete 0.15
87 TestFunctional/serial/MinikubeKubectlCmd 0.17
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 40.27
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.8
92 TestFunctional/serial/LogsFileCmd 1.85
93 TestFunctional/serial/InvalidService 6.56
95 TestFunctional/parallel/ConfigCmd 0.63
96 TestFunctional/parallel/DashboardCmd 8.16
97 TestFunctional/parallel/DryRun 0.79
98 TestFunctional/parallel/InternationalLanguage 0.22
99 TestFunctional/parallel/StatusCmd 1.2
103 TestFunctional/parallel/ServiceCmdConnect 9.76
104 TestFunctional/parallel/AddonsCmd 0.29
105 TestFunctional/parallel/PersistentVolumeClaim 23.59
107 TestFunctional/parallel/SSHCmd 0.79
108 TestFunctional/parallel/CpCmd 2.8
110 TestFunctional/parallel/FileSync 0.43
111 TestFunctional/parallel/CertSync 2.45
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
119 TestFunctional/parallel/License 0.41
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.75
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 6.25
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
133 TestFunctional/parallel/ProfileCmd/profile_list 0.46
134 TestFunctional/parallel/ServiceCmd/List 0.68
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
136 TestFunctional/parallel/MountCmd/any-port 8.86
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.8
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
139 TestFunctional/parallel/ServiceCmd/Format 0.56
140 TestFunctional/parallel/ServiceCmd/URL 0.61
141 TestFunctional/parallel/MountCmd/specific-port 2.38
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
143 TestFunctional/parallel/Version/short 0.1
144 TestFunctional/parallel/Version/components 0.9
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.73
150 TestFunctional/parallel/ImageCommands/Setup 1.82
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
161 TestFunctional/delete_addon-resizer_images 0.09
162 TestFunctional/delete_my-image_image 0.03
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 94.61
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.52
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
174 TestJSONOutput/start/Command 83.2
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.83
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.77
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.9
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.26
199 TestKicCustomNetwork/create_custom_network 46.67
200 TestKicCustomNetwork/use_default_bridge_network 35.06
201 TestKicExistingNetwork 37.68
202 TestKicCustomSubnet 34.19
203 TestKicStaticIP 38.42
204 TestMainNoArgs 0.07
205 TestMinikubeProfile 68.33
208 TestMountStart/serial/StartWithMountFirst 6.91
209 TestMountStart/serial/VerifyMountFirst 0.3
210 TestMountStart/serial/StartWithMountSecond 7.67
211 TestMountStart/serial/VerifyMountSecond 0.3
212 TestMountStart/serial/DeleteFirst 1.66
213 TestMountStart/serial/VerifyMountPostDelete 0.29
214 TestMountStart/serial/Stop 1.22
215 TestMountStart/serial/RestartStopped 8.26
216 TestMountStart/serial/VerifyMountPostStop 0.3
219 TestMultiNode/serial/FreshStart2Nodes 75.22
220 TestMultiNode/serial/DeployApp2Nodes 4.77
221 TestMultiNode/serial/PingHostFrom2Pods 1.16
222 TestMultiNode/serial/AddNode 17.43
223 TestMultiNode/serial/MultiNodeLabels 0.1
224 TestMultiNode/serial/ProfileList 0.36
225 TestMultiNode/serial/CopyFile 11.45
226 TestMultiNode/serial/StopNode 2.37
227 TestMultiNode/serial/StartAfterStop 11.89
228 TestMultiNode/serial/RestartKeepsNodes 124.6
229 TestMultiNode/serial/DeleteNode 5.16
230 TestMultiNode/serial/StopMultiNode 24.19
231 TestMultiNode/serial/RestartMultiNode 88.13
232 TestMultiNode/serial/ValidateNameConflict 37.97
237 TestPreload 160.95
239 TestScheduledStopUnix 106.4
242 TestInsufficientStorage 12.89
243 TestRunningBinaryUpgrade 81.99
245 TestKubernetesUpgrade 377.93
246 TestMissingContainerUpgrade 171.98
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
249 TestNoKubernetes/serial/StartWithK8s 43.12
250 TestNoKubernetes/serial/StartWithStopK8s 17.08
251 TestNoKubernetes/serial/Start 7.36
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
253 TestNoKubernetes/serial/ProfileList 1.14
254 TestNoKubernetes/serial/Stop 1.34
255 TestNoKubernetes/serial/StartNoArgs 7.98
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.5
257 TestStoppedBinaryUpgrade/Setup 1.13
258 TestStoppedBinaryUpgrade/Upgrade 104.19
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
268 TestPause/serial/Start 83.79
269 TestPause/serial/SecondStartNoReconfiguration 8.31
270 TestPause/serial/Pause 1.07
271 TestPause/serial/VerifyStatus 0.47
272 TestPause/serial/Unpause 1
273 TestPause/serial/PauseAgain 1.21
274 TestPause/serial/DeletePaused 3.55
275 TestPause/serial/VerifyDeletedResources 0.21
283 TestNetworkPlugins/group/false 5.41
288 TestStartStop/group/old-k8s-version/serial/FirstStart 114.7
289 TestStartStop/group/old-k8s-version/serial/DeployApp 8.54
290 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
291 TestStartStop/group/old-k8s-version/serial/Stop 12.11
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
293 TestStartStop/group/old-k8s-version/serial/SecondStart 661.71
295 TestStartStop/group/no-preload/serial/FirstStart 69.8
296 TestStartStop/group/no-preload/serial/DeployApp 8.38
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
298 TestStartStop/group/no-preload/serial/Stop 12.18
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
300 TestStartStop/group/no-preload/serial/SecondStart 340.8
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
303 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
304 TestStartStop/group/no-preload/serial/Pause 3.43
306 TestStartStop/group/embed-certs/serial/FirstStart 95.92
307 TestStartStop/group/embed-certs/serial/DeployApp 8.34
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
309 TestStartStop/group/embed-certs/serial/Stop 12.16
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
311 TestStartStop/group/embed-certs/serial/SecondStart 337.54
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
315 TestStartStop/group/old-k8s-version/serial/Pause 3.55
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.03
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.29
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.14
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 344.3
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
326 TestStartStop/group/embed-certs/serial/Pause 3.52
328 TestStartStop/group/newest-cni/serial/FirstStart 48.69
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
331 TestStartStop/group/newest-cni/serial/Stop 1.29
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
333 TestStartStop/group/newest-cni/serial/SecondStart 31.4
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
337 TestStartStop/group/newest-cni/serial/Pause 3.42
338 TestNetworkPlugins/group/auto/Start 86.57
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
340 TestNetworkPlugins/group/auto/KubeletFlags 0.46
341 TestNetworkPlugins/group/auto/NetCatPod 9.34
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
343 TestNetworkPlugins/group/auto/DNS 0.19
344 TestNetworkPlugins/group/auto/Localhost 0.17
345 TestNetworkPlugins/group/auto/HairPin 0.17
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.47
348 TestNetworkPlugins/group/kindnet/Start 94
349 TestNetworkPlugins/group/calico/Start 80.55
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/calico/ControllerPod 6.01
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
353 TestNetworkPlugins/group/kindnet/NetCatPod 8.27
354 TestNetworkPlugins/group/calico/KubeletFlags 0.37
355 TestNetworkPlugins/group/calico/NetCatPod 9.3
356 TestNetworkPlugins/group/kindnet/DNS 0.32
357 TestNetworkPlugins/group/kindnet/Localhost 0.22
358 TestNetworkPlugins/group/kindnet/HairPin 0.17
359 TestNetworkPlugins/group/calico/DNS 0.31
360 TestNetworkPlugins/group/calico/Localhost 0.24
361 TestNetworkPlugins/group/calico/HairPin 0.22
362 TestNetworkPlugins/group/custom-flannel/Start 59.9
363 TestNetworkPlugins/group/enable-default-cni/Start 88.67
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.31
366 TestNetworkPlugins/group/custom-flannel/DNS 0.18
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
369 TestNetworkPlugins/group/flannel/Start 63.75
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.49
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.57
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
375 TestNetworkPlugins/group/bridge/Start 87.66
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
378 TestNetworkPlugins/group/flannel/NetCatPod 10.41
379 TestNetworkPlugins/group/flannel/DNS 0.18
380 TestNetworkPlugins/group/flannel/Localhost 0.2
381 TestNetworkPlugins/group/flannel/HairPin 0.19
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
383 TestNetworkPlugins/group/bridge/NetCatPod 9.24
384 TestNetworkPlugins/group/bridge/DNS 0.18
385 TestNetworkPlugins/group/bridge/Localhost 0.15
386 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (16.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-807644 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-807644 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.761534385s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.76s)
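The json-events subtests exercise minikube's machine-readable output: with -o=json, every progress step is printed as one JSON object per line. A consumer sketch, assuming jq is installed and that the stream follows minikube's CloudEvents-style convention (the event type "io.k8s.sigs.minikube.step" and the profile name download-only-demo are assumptions, not taken from this report):

# Print only the human-readable step messages from the JSON event stream.
out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
    --force --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'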

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-807644
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-807644: exit status 85 (98.836195ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-807644 | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC |          |
	|         | -p download-only-807644        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:53:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:53:20.898046 1891170 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:53:20.898199 1891170 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:53:20.898223 1891170 out.go:309] Setting ErrFile to fd 2...
	I0116 02:53:20.898229 1891170 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:53:20.898499 1891170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	W0116 02:53:20.898654 1891170 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17967-1885793/.minikube/config/config.json: open /home/jenkins/minikube-integration/17967-1885793/.minikube/config/config.json: no such file or directory
	I0116 02:53:20.899215 1891170 out.go:303] Setting JSON to true
	I0116 02:53:20.900107 1891170 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34537,"bootTime":1705339064,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 02:53:20.900178 1891170 start.go:138] virtualization:  
	I0116 02:53:20.903503 1891170 out.go:97] [download-only-807644] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 02:53:20.905511 1891170 out.go:169] MINIKUBE_LOCATION=17967
	W0116 02:53:20.903741 1891170 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 02:53:20.903821 1891170 notify.go:220] Checking for updates...
	I0116 02:53:20.907469 1891170 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:53:20.909625 1891170 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 02:53:20.911371 1891170 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 02:53:20.913294 1891170 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 02:53:20.917146 1891170 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:53:20.917390 1891170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:53:20.941773 1891170 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:53:20.941904 1891170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:53:21.022692 1891170 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-01-16 02:53:21.011303223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:53:21.022799 1891170 docker.go:295] overlay module found
	I0116 02:53:21.025069 1891170 out.go:97] Using the docker driver based on user configuration
	I0116 02:53:21.025095 1891170 start.go:298] selected driver: docker
	I0116 02:53:21.025101 1891170 start.go:902] validating driver "docker" against <nil>
	I0116 02:53:21.025215 1891170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:53:21.094488 1891170 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-01-16 02:53:21.084997286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:53:21.094671 1891170 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:53:21.094969 1891170 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 02:53:21.095184 1891170 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:53:21.097104 1891170 out.go:169] Using Docker driver with root privileges
	I0116 02:53:21.099032 1891170 cni.go:84] Creating CNI manager for ""
	I0116 02:53:21.099053 1891170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:53:21.099066 1891170 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:53:21.099080 1891170 start_flags.go:321] config:
	{Name:download-only-807644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-807644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:53:21.100993 1891170 out.go:97] Starting control plane node download-only-807644 in cluster download-only-807644
	I0116 02:53:21.101013 1891170 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0116 02:53:21.103024 1891170 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:53:21.103048 1891170 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0116 02:53:21.103213 1891170 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:53:21.121023 1891170 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 02:53:21.121046 1891170 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:53:21.121281 1891170 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 02:53:21.121524 1891170 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:53:21.176246 1891170 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0116 02:53:21.176285 1891170 cache.go:56] Caching tarball of preloaded images
	I0116 02:53:21.176470 1891170 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0116 02:53:21.178977 1891170 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 02:53:21.179003 1891170 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0116 02:53:21.286995 1891170 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0116 02:53:25.641914 1891170 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 02:53:35.652723 1891170 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0116 02:53:35.652823 1891170 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-807644"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
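Exit status 85 from "minikube logs" is the outcome this test expects: a --download-only profile only populates the cache and never creates a node, hence the control plane node "" does not exist message in the stdout above. The Last Start log also shows the preload tarball URL carrying an md5 checksum that minikube verifies after downloading; the same check can be reproduced by hand, as in this sketch (assuming curl and md5sum are available; URL and checksum copied from the log above):

# Download the preload tarball and verify it against the checksum from the log.
curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4"
echo "1f1e2324dbd6e4f3d8734226d9194e9f  preload.tar.lz4" | md5sum -c -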

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (13.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.387054216s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (13.39s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-807644
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-111300 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-111300 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (18.998563358s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (19.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-111300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-111300: exit status 85 (92.532211ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-807644 | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC |                     |
	|         | -p download-only-807644        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC | 16 Jan 24 02:53 UTC |
	| delete  | -p download-only-807644        | download-only-807644 | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC | 16 Jan 24 02:53 UTC |
	| start   | -o=json --download-only        | download-only-111300 | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC |                     |
	|         | -p download-only-111300        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:53:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:53:51.309739 1891377 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:53:51.309935 1891377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:53:51.309944 1891377 out.go:309] Setting ErrFile to fd 2...
	I0116 02:53:51.309950 1891377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:53:51.310248 1891377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 02:53:51.310707 1891377 out.go:303] Setting JSON to true
	I0116 02:53:51.311594 1891377 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34568,"bootTime":1705339064,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 02:53:51.311666 1891377 start.go:138] virtualization:  
	I0116 02:53:51.314005 1891377 out.go:97] [download-only-111300] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 02:53:51.316097 1891377 out.go:169] MINIKUBE_LOCATION=17967
	I0116 02:53:51.314315 1891377 notify.go:220] Checking for updates...
	I0116 02:53:51.320416 1891377 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:53:51.322744 1891377 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 02:53:51.324446 1891377 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 02:53:51.326359 1891377 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 02:53:51.329689 1891377 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:53:51.329971 1891377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:53:51.354222 1891377 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:53:51.354341 1891377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:53:51.448638 1891377 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-16 02:53:51.438795134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:53:51.448742 1891377 docker.go:295] overlay module found
	I0116 02:53:51.450896 1891377 out.go:97] Using the docker driver based on user configuration
	I0116 02:53:51.450923 1891377 start.go:298] selected driver: docker
	I0116 02:53:51.450930 1891377 start.go:902] validating driver "docker" against <nil>
	I0116 02:53:51.451034 1891377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:53:51.514998 1891377 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-16 02:53:51.506026908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:53:51.515153 1891377 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:53:51.515420 1891377 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 02:53:51.515589 1891377 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:53:51.517750 1891377 out.go:169] Using Docker driver with root privileges
	I0116 02:53:51.519965 1891377 cni.go:84] Creating CNI manager for ""
	I0116 02:53:51.519988 1891377 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:53:51.520002 1891377 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:53:51.520018 1891377 start_flags.go:321] config:
	{Name:download-only-111300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-111300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:53:51.522084 1891377 out.go:97] Starting control plane node download-only-111300 in cluster download-only-111300
	I0116 02:53:51.522105 1891377 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0116 02:53:51.523834 1891377 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:53:51.523856 1891377 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:53:51.523955 1891377 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:53:51.540926 1891377 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 02:53:51.540948 1891377 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:53:51.541078 1891377 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 02:53:51.541097 1891377 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 02:53:51.541101 1891377 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 02:53:51.541109 1891377 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 02:53:51.600083 1891377 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0116 02:53:51.600104 1891377 cache.go:56] Caching tarball of preloaded images
	I0116 02:53:51.600760 1891377 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 02:53:51.603188 1891377 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0116 02:53:51.603210 1891377 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0116 02:53:51.712583 1891377 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-111300"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
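Note: the exit status 85 above is expected rather than a failure: a --download-only start never creates a control plane node (hence the empty node name in the message), so `minikube logs` has nothing to collect. A minimal sketch of reproducing this by hand, using the flags from this run (the profile name `demo` is illustrative):

    out/minikube-linux-arm64 start --download-only -p demo --kubernetes-version=v1.28.4 \
      --container-runtime=containerd --driver=docker
    out/minikube-linux-arm64 logs -p demo; echo $?   # expect 85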

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (13.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.383114371s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (13.38s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-111300
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (14.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-795548 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-795548 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.516671336s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (14.52s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-795548
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-795548: exit status 85 (91.953701ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-807644 | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC |                     |
	|         | -p download-only-807644           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC | 16 Jan 24 02:53 UTC |
	| delete  | -p download-only-807644           | download-only-807644 | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC | 16 Jan 24 02:53 UTC |
	| start   | -o=json --download-only           | download-only-111300 | jenkins | v1.32.0 | 16 Jan 24 02:53 UTC |                     |
	|         | -p download-only-111300           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| delete  | -p download-only-111300           | download-only-111300 | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC | 16 Jan 24 02:54 UTC |
	| start   | -o=json --download-only           | download-only-795548 | jenkins | v1.32.0 | 16 Jan 24 02:54 UTC |                     |
	|         | -p download-only-795548           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:54:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:54:23.951616 1891587 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:54:23.951761 1891587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:54:23.951769 1891587 out.go:309] Setting ErrFile to fd 2...
	I0116 02:54:23.951775 1891587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:54:23.952019 1891587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 02:54:23.952431 1891587 out.go:303] Setting JSON to true
	I0116 02:54:23.953250 1891587 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34600,"bootTime":1705339064,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 02:54:23.953322 1891587 start.go:138] virtualization:  
	I0116 02:54:23.956065 1891587 out.go:97] [download-only-795548] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 02:54:23.958031 1891587 out.go:169] MINIKUBE_LOCATION=17967
	I0116 02:54:23.956381 1891587 notify.go:220] Checking for updates...
	I0116 02:54:23.960374 1891587 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:54:23.962742 1891587 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 02:54:23.964578 1891587 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 02:54:23.966316 1891587 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 02:54:23.969811 1891587 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:54:23.970160 1891587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:54:23.995222 1891587 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 02:54:23.995344 1891587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:54:24.108432 1891587 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-16 02:54:24.098388314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:54:24.108552 1891587 docker.go:295] overlay module found
	I0116 02:54:24.119868 1891587 out.go:97] Using the docker driver based on user configuration
	I0116 02:54:24.119924 1891587 start.go:298] selected driver: docker
	I0116 02:54:24.119931 1891587 start.go:902] validating driver "docker" against <nil>
	I0116 02:54:24.120046 1891587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 02:54:24.191036 1891587 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-16 02:54:24.180875012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 02:54:24.191194 1891587 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:54:24.191493 1891587 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 02:54:24.191670 1891587 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:54:24.203951 1891587 out.go:169] Using Docker driver with root privileges
	I0116 02:54:24.212571 1891587 cni.go:84] Creating CNI manager for ""
	I0116 02:54:24.212602 1891587 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0116 02:54:24.212617 1891587 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:54:24.212630 1891587 start_flags.go:321] config:
	{Name:download-only-795548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-795548 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:54:24.223749 1891587 out.go:97] Starting control plane node download-only-795548 in cluster download-only-795548
	I0116 02:54:24.223784 1891587 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0116 02:54:24.237463 1891587 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 02:54:24.237498 1891587 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0116 02:54:24.237673 1891587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 02:54:24.259860 1891587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 02:54:24.259885 1891587 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 02:54:24.260004 1891587 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 02:54:24.260023 1891587 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 02:54:24.260028 1891587 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 02:54:24.260036 1891587 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 02:54:24.310661 1891587 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0116 02:54:24.310695 1891587 cache.go:56] Caching tarball of preloaded images
	I0116 02:54:24.310868 1891587 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0116 02:54:24.322318 1891587 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0116 02:54:24.322355 1891587 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0116 02:54:24.444975 1891587 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/17967-1885793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-795548"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (13.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.391919474s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (13.39s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-795548
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-337521 --alsologtostderr --binary-mirror http://127.0.0.1:34529 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-337521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-337521
--- PASS: TestBinaryMirror (0.62s)
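For context, --binary-mirror redirects minikube's kubeadm/kubelet/kubectl downloads to an alternate HTTP endpoint; the test above only verifies that a start against such a mirror succeeds. A rough sketch of standing one up locally (the directory layout below is an assumption about the upstream release-mirror structure, not taken from this run):

    # assumed layout: /srv/k8s-mirror/<version>/bin/linux/arm64/<binary>
    python3 -m http.server 34529 --directory /srv/k8s-mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:34529 --driver=docker --container-runtime=containerd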

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-843965
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-843965: exit status 85 (98.108568ms)

                                                
                                                
-- stdout --
	* Profile "addons-843965" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-843965"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-843965
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-843965: exit status 85 (109.139514ms)

                                                
                                                
-- stdout --
	* Profile "addons-843965" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-843965"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

                                                
                                    
TestAddons/Setup (124.25s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-843965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-843965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m4.253674669s)
--- PASS: TestAddons/Setup (124.25s)
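With that many --addons flags in one start, a quick sanity check is to list addon status afterwards (sketch; the exact table format varies by minikube version):

    out/minikube-linux-arm64 -p addons-843965 addons list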

                                                
                                    
TestAddons/parallel/Registry (15.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 39.220481ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bzgv9" [af30b04d-da1d-4148-b183-4ca8c48dba30] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005926665s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sfv97" [224d6c6a-4fbd-415b-92b0-562bdde1b323] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004541631s
addons_test.go:340: (dbg) Run:  kubectl --context addons-843965 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-843965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-843965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.535799906s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 ip
2024/01/16 02:57:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.74s)
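The wget check above probes the registry through its in-cluster service DNS name; the DEBUG line shows the same registry answering on the node IP at port 5000 via registry-proxy. A sketch of the host-side equivalent, using the standard Docker Registry v2 API:

    IP=$(out/minikube-linux-arm64 -p addons-843965 ip)
    curl -s "http://$IP:5000/v2/_catalog"   # list repositories held by the addon registry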

                                                
                                    
TestAddons/parallel/InspektorGadget (10.94s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j8jtx" [aed8462c-d197-4642-818a-5e24c69f91aa] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00459445s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-843965
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-843965: (5.929809505s)
--- PASS: TestAddons/parallel/InspektorGadget (10.94s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.435704ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-cshtq" [e2de00b3-dd3e-4347-a94f-b186d7fe0fea] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004155895s
addons_test.go:415: (dbg) Run:  kubectl --context addons-843965 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

                                                
                                    
TestAddons/parallel/CSI (89.95s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 40.524125ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-843965 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843965 get pvc hpvc -o jsonpath={.status.phase} -n default
    [identical poll repeated 50 times in total while waiting for pvc "hpvc" to bind]
addons_test.go:574: (dbg) Run:  kubectl --context addons-843965 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [64c159a7-1700-4175-8bba-08c4c83274cb] Pending
helpers_test.go:344: "task-pv-pod" [64c159a7-1700-4175-8bba-08c4c83274cb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [64c159a7-1700-4175-8bba-08c4c83274cb] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003346778s
addons_test.go:584: (dbg) Run:  kubectl --context addons-843965 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-843965 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-843965 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-843965 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-843965 delete pod task-pv-pod: (1.004104117s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-843965 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-843965 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
    [identical poll repeated 11 times in total while waiting for pvc "hpvc-restore" to bind]
addons_test.go:616: (dbg) Run:  kubectl --context addons-843965 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0c918577-5681-4924-b36b-0e64bdbb0299] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0c918577-5681-4924-b36b-0e64bdbb0299] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004024774s
addons_test.go:626: (dbg) Run:  kubectl --context addons-843965 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-843965 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-843965 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-843965 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.841243385s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (89.95s)
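The repeated `get pvc` invocations above are the helper's readiness poll. Written by hand, the same wait is a loop on `.status.phase` (a sketch using only commands that appear in this run):

    # block until the claim binds, as helpers_test.go:394 effectively does
    until [ "$(kubectl --context addons-843965 get pvc hpvc -o 'jsonpath={.status.phase}')" = "Bound" ]; do
      sleep 2
    done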

                                                
                                    
TestAddons/parallel/Headlamp (11.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-843965 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-843965 --alsologtostderr -v=1: (1.521145s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-q6vng" [872515d7-11a2-43a0-9d76-b287c38a1b42] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-q6vng" [872515d7-11a2-43a0-9d76-b287c38a1b42] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-q6vng" [872515d7-11a2-43a0-9d76-b287c38a1b42] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00484203s
--- PASS: TestAddons/parallel/Headlamp (11.53s)

                                                
                                    
TestAddons/parallel/LocalPath (51.55s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-843965 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-843965 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843965 get pvc test-pvc -o jsonpath={.status.phase} -n default
    [identical poll repeated 5 times in total while waiting for pvc "test-pvc" to bind]
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8601d0b8-4e06-4197-aaa7-fee037aa3330] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8601d0b8-4e06-4197-aaa7-fee037aa3330] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8601d0b8-4e06-4197-aaa7-fee037aa3330] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00409199s
addons_test.go:891: (dbg) Run:  kubectl --context addons-843965 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 ssh "cat /opt/local-path-provisioner/pvc-7b134c94-38a8-4396-b5f8-502ac0f0b814_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-843965 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-843965 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-843965 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-843965 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.311018202s)
--- PASS: TestAddons/parallel/LocalPath (51.55s)
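The `ssh "cat ..."` step works because the local-path provisioner backs each claim with a plain directory on the node, named pvc-<uid>_<namespace>_<claim> as seen in the path above. To browse such volumes directly (sketch):

    out/minikube-linux-arm64 -p addons-843965 ssh "ls /opt/local-path-provisioner"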

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zlmrk" [da7bd62d-e415-4145-ad12-6feb7be5fe21] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005504599s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-843965
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                    
TestAddons/parallel/Yakd (5s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-ccdsg" [a3f42df0-9002-4ebb-8887-f9afd315fce6] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003855416s
--- PASS: TestAddons/parallel/Yakd (5.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-843965 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-843965 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.36s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-843965
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-843965: (12.04390966s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-843965
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-843965
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-843965
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

                                                
                                    
TestCertOptions (35.32s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-566218 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-566218 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.235567062s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-566218 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-566218 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-566218 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-566218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-566218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-566218: (2.360976858s)
--- PASS: TestCertOptions (35.32s)
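The openssl step above dumps the full certificate; when checking --apiserver-ips/--apiserver-names by hand it is usually enough to narrow the output to the SAN block (sketch):

    out/minikube-linux-arm64 -p cert-options-566218 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'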

                                                
                                    
TestCertExpiration (226.26s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-295880 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0116 03:33:30.341091 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-295880 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.908796167s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-295880 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-295880 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.974721393s)
helpers_test.go:175: Cleaning up "cert-expiration-295880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-295880
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-295880: (2.376617642s)
--- PASS: TestCertExpiration (226.26s)
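The second start above re-issues certificates with an 8760h (one-year) lifetime. One way to confirm the new expiry, reusing the certificate path exercised by TestCertOptions (sketch):

    out/minikube-linux-arm64 -p cert-expiration-295880 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"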

                                                
                                    
TestForceSystemdFlag (39.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-470141 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-470141 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.681891033s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-470141 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-470141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-470141
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-470141: (2.181096723s)
--- PASS: TestForceSystemdFlag (39.33s)
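The `cat /etc/containerd/config.toml` step is where the assertion happens: with --force-systemd, containerd's runc runtime should be switched to the systemd cgroup driver. A direct check (the expected line is an assumption about what the test asserts):

    out/minikube-linux-arm64 -p force-systemd-flag-470141 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"
    # expect: SystemdCgroup = true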

                                                
                                    
TestForceSystemdEnv (52.65s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-637798 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-637798 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (49.934824158s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-637798 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-637798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-637798
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-637798: (2.29830967s)
--- PASS: TestForceSystemdEnv (52.65s)
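Unlike TestForceSystemdFlag, this start line carries no --force-systemd, so the systemd cgroup driver is presumably requested via the MINIKUBE_FORCE_SYSTEMD environment variable (the same knob that appears as "- MINIKUBE_FORCE_SYSTEMD=" in the start output later in this report). A sketch of the equivalent manual run:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-637798 --memory=2048 --driver=docker --container-runtime=containerd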

TestDockerEnvContainerd (47.36s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-002719 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-002719 --driver=docker  --container-runtime=containerd: (30.912383809s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-002719"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-002719": (1.38189154s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UnRUwEzsoHFg/agent.1909281" SSH_AGENT_PID="1909282" DOCKER_HOST=ssh://docker@127.0.0.1:35028 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UnRUwEzsoHFg/agent.1909281" SSH_AGENT_PID="1909282" DOCKER_HOST=ssh://docker@127.0.0.1:35028 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UnRUwEzsoHFg/agent.1909281" SSH_AGENT_PID="1909282" DOCKER_HOST=ssh://docker@127.0.0.1:35028 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.716596543s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UnRUwEzsoHFg/agent.1909281" SSH_AGENT_PID="1909282" DOCKER_HOST=ssh://docker@127.0.0.1:35028 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-002719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-002719
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-002719: (2.033573136s)
--- PASS: TestDockerEnvContainerd (47.36s)
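The test evaluates the docker-env --ssh-host output by exporting SSH_AUTH_SOCK and DOCKER_HOST by hand, as shown above; the usual interactive pattern for the same thing is the eval form (a sketch, not quoted from the log):

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-002719)"
    docker version     # now talks to the daemon inside the node via ssh://docker@127.0.0.1:<forwarded port>
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls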

TestErrorSpam/setup (33.65s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-109615 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-109615 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-109615 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-109615 --driver=docker  --container-runtime=containerd: (33.648796709s)
--- PASS: TestErrorSpam/setup (33.65s)

TestErrorSpam/start (0.87s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.85s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 pause
--- PASS: TestErrorSpam/pause (1.85s)

TestErrorSpam/unpause (1.94s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 stop: (1.283915436s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109615 --log_dir /tmp/nospam-109615 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17967-1885793/.minikube/files/etc/test/nested/copy/1891165/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-060112 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0116 03:01:58.181296 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:01:58.187052 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:01:58.197359 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:01:58.217680 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:01:58.257928 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:01:58.338228 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:01:58.498583 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:01:58.819202 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:01:59.459495 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:02:00.739952 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:02:03.300143 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:02:08.420892 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:02:18.661719 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-060112 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m1.302223619s)
--- PASS: TestFunctional/serial/StartWithProxy (61.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.93s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-060112 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-060112 --alsologtostderr -v=8: (5.924749853s)
functional_test.go:659: soft start took 5.927802156s for "functional-060112" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.93s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-060112 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 cache add registry.k8s.io/pause:3.1: (1.512027558s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 cache add registry.k8s.io/pause:3.3: (1.329795186s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 cache add registry.k8s.io/pause:latest: (1.224679057s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.07s)

TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-060112 /tmp/TestFunctionalserialCacheCmdcacheadd_local2166957052/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cache add minikube-local-cache-test:functional-060112
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cache delete minikube-local-cache-test:functional-060112
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-060112
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (336.927567ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 cache reload: (1.169035814s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.24s)
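The round trip being verified: removing an image inside the node makes crictl inspecti exit 1 ("no such image"), and cache reload pushes every image in the local minikube cache back into the node:

    out/minikube-linux-arm64 -p functional-060112 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-060112 cache reload
    out/minikube-linux-arm64 -p functional-060112 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again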

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 kubectl -- --context functional-060112 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-060112 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (40.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-060112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0116 03:02:39.142563 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-060112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.271248055s)
functional_test.go:757: restart took 40.271353906s for "functional-060112" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.27s)
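--extra-config takes component.key=value triples and is persisted into the profile (the ExtraOptions field in the profile dump later in this report still shows the NamespaceAutoProvision plugin). The restart used here:

    out/minikube-linux-arm64 start -p functional-060112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all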

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-060112 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.8s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 logs: (1.804510234s)
--- PASS: TestFunctional/serial/LogsCmd (1.80s)

TestFunctional/serial/LogsFileCmd (1.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 logs --file /tmp/TestFunctionalserialLogsFileCmd2330642944/001/logs.txt
E0116 03:03:20.102918 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 logs --file /tmp/TestFunctionalserialLogsFileCmd2330642944/001/logs.txt: (1.849305667s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.85s)

TestFunctional/serial/InvalidService (6.56s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-060112 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-060112
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-060112: exit status 115 (496.672489ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31344 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-060112 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-060112 delete -f testdata/invalidsvc.yaml: (2.83238549s)
--- PASS: TestFunctional/serial/InvalidService (6.56s)
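Exit status 115 (SVC_UNREACHABLE) is the expected result: the NodePort table is still printed, but minikube service fails because the service has no running backing pod. To reproduce:

    kubectl --context functional-060112 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-060112    # exit 115
    kubectl --context functional-060112 delete -f testdata/invalidsvc.yaml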

TestFunctional/parallel/ConfigCmd (0.63s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 config get cpus: exit status 14 (119.411555ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 config get cpus: exit status 14 (114.261951ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.63s)
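Exit status 14 is minikube's code for config get on an unset key; the cycle exercised above, with the expected outcomes, is:

    out/minikube-linux-arm64 -p functional-060112 config get cpus    # exit 14: key not set
    out/minikube-linux-arm64 -p functional-060112 config set cpus 2
    out/minikube-linux-arm64 -p functional-060112 config get cpus    # prints 2
    out/minikube-linux-arm64 -p functional-060112 config unset cpus  # back to exit 14 on the next get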

TestFunctional/parallel/DashboardCmd (8.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-060112 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-060112 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1922898: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.16s)

TestFunctional/parallel/DryRun (0.79s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-060112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-060112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (373.822331ms)

-- stdout --
	* [functional-060112] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0116 03:03:59.985534 1922486 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:03:59.985750 1922486 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:03:59.985775 1922486 out.go:309] Setting ErrFile to fd 2...
	I0116 03:03:59.985794 1922486 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:03:59.987412 1922486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 03:03:59.988237 1922486 out.go:303] Setting JSON to false
	I0116 03:03:59.989392 1922486 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35176,"bootTime":1705339064,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 03:03:59.989529 1922486 start.go:138] virtualization:  
	I0116 03:03:59.993357 1922486 out.go:177] * [functional-060112] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:03:59.996468 1922486 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:03:59.996445 1922486 notify.go:220] Checking for updates...
	I0116 03:04:00.000023 1922486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:04:00.022611 1922486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 03:04:00.024698 1922486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 03:04:00.038840 1922486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:04:00.041014 1922486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:04:00.063353 1922486 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:04:00.064082 1922486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:04:00.125613 1922486 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:04:00.125755 1922486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:04:00.260456 1922486 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2024-01-16 03:04:00.244775684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:04:00.260574 1922486 docker.go:295] overlay module found
	I0116 03:04:00.262831 1922486 out.go:177] * Using the docker driver based on existing profile
	I0116 03:04:00.264541 1922486 start.go:298] selected driver: docker
	I0116 03:04:00.264561 1922486 start.go:902] validating driver "docker" against &{Name:functional-060112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-060112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:04:00.264740 1922486 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:04:00.267404 1922486 out.go:177] 
	W0116 03:04:00.269631 1922486 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 03:04:00.272163 1922486 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-060112 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.79s)
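Exit status 23 with RSRC_INSUFFICIENT_REQ_MEMORY shows the dry-run validator rejecting 250MB against the 1800MB usable minimum without touching the running profile; the second invocation, with no memory override, passes:

    out/minikube-linux-arm64 start -p functional-060112 --dry-run --memory 250MB --driver=docker --container-runtime=containerd   # exit 23
    out/minikube-linux-arm64 start -p functional-060112 --dry-run --driver=docker --container-runtime=containerd                  # succeeds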

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-060112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-060112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (220.494251ms)

-- stdout --
	* [functional-060112] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0116 03:03:59.760751 1922445 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:03:59.760892 1922445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:03:59.760903 1922445 out.go:309] Setting ErrFile to fd 2...
	I0116 03:03:59.760909 1922445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:03:59.761928 1922445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 03:03:59.762387 1922445 out.go:303] Setting JSON to false
	I0116 03:03:59.763604 1922445 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35176,"bootTime":1705339064,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 03:03:59.763747 1922445 start.go:138] virtualization:  
	I0116 03:03:59.767351 1922445 out.go:177] * [functional-060112] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0116 03:03:59.769057 1922445 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:03:59.770625 1922445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:03:59.769229 1922445 notify.go:220] Checking for updates...
	I0116 03:03:59.774687 1922445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 03:03:59.776757 1922445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 03:03:59.778436 1922445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:03:59.780120 1922445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:03:59.782781 1922445 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:03:59.783307 1922445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:03:59.807773 1922445 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:03:59.807916 1922445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:03:59.887432 1922445 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2024-01-16 03:03:59.876948422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:03:59.887536 1922445 docker.go:295] overlay module found
	I0116 03:03:59.889543 1922445 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0116 03:03:59.891738 1922445 start.go:298] selected driver: docker
	I0116 03:03:59.891783 1922445 start.go:902] validating driver "docker" against &{Name:functional-060112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-060112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:03:59.891877 1922445 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:03:59.894654 1922445 out.go:177] 
	W0116 03:03:59.896656 1922445 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 03:03:59.898453 1922445 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.2s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
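The -f argument at functional_test.go:856 is a Go template over the status structure; "kublet" there is just a label string the test chose, while the actual field is {{.Kubelet}}. A cleaner manual equivalent:

    out/minikube-linux-arm64 -p functional-060112 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'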

TestFunctional/parallel/ServiceCmdConnect (9.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-060112 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-060112 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6wfdp" [744ea6b3-915f-47c5-a536-6070690c674d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6wfdp" [744ea6b3-915f-47c5-a536-6070690c674d] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004718758s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31108
functional_test.go:1674: http://192.168.49.2:31108: success! body:

Hostname: hello-node-connect-7799dfb7c6-6wfdp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31108
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.76s)
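The test fetches the URL with its own Go HTTP client; the equivalent check from a shell (curl standing in for the test's client) is:

    URL="$(out/minikube-linux-arm64 -p functional-060112 service hello-node-connect --url)"
    curl -s "$URL"    # expect the echoserver Hostname/Request Information dump shown above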

TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (23.59s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3277b38f-b8d3-42b5-8f5e-b540371222ea] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004333897s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-060112 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-060112 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-060112 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-060112 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e83e2e8d-bb89-46f3-8111-e6c63957cc41] Pending
helpers_test.go:344: "sp-pod" [e83e2e8d-bb89-46f3-8111-e6c63957cc41] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e83e2e8d-bb89-46f3-8111-e6c63957cc41] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004154942s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-060112 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-060112 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-060112 delete -f testdata/storage-provisioner/pod.yaml: (1.565811687s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-060112 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [adb5320b-328e-4621-be57-162782b665b6] Pending
helpers_test.go:344: "sp-pod" [adb5320b-328e-4621-be57-162782b665b6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003909745s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-060112 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.59s)
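The delete-and-reapply in the middle is the actual persistence assertion: /tmp/mount is backed by the PVC, so a file written before the pod is destroyed must still exist in the replacement pod:

    kubectl --context functional-060112 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-060112 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-060112 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-060112 exec sp-pod -- ls /tmp/mount    # foo is still there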

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (2.8s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh -n functional-060112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cp functional-060112:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1419128388/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh -n functional-060112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh -n functional-060112 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.80s)
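
Note: minikube cp is exercised in both directions above. A short sketch (local paths illustrative); when the target has no <node>: prefix it lands on the control-plane node:

  # host -> node
  out/minikube-linux-arm64 -p functional-060112 cp ./local.txt /home/docker/cp-test.txt
  # node -> host, addressing the node explicitly as <node>:<path>
  out/minikube-linux-arm64 -p functional-060112 cp functional-060112:/home/docker/cp-test.txt ./copied-back.txt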

                                                
                                    
TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1891165/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo cat /etc/test/nested/copy/1891165/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)
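
Note: minikube's file sync copies anything placed under $MINIKUBE_HOME/.minikube/files into the node at the mirrored absolute path, which is how /etc/test/nested/copy/1891165/hosts got there. A sketch of staging such a file, assuming the default MINIKUBE_HOME:

  mkdir -p ~/.minikube/files/etc/test/nested/copy/1891165
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/1891165/hosts
  # after the cluster (re)starts, the file is visible inside the node:
  out/minikube-linux-arm64 -p functional-060112 ssh "cat /etc/test/nested/copy/1891165/hosts"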

                                                
                                    
TestFunctional/parallel/CertSync (2.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1891165.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo cat /etc/ssl/certs/1891165.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1891165.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo cat /usr/share/ca-certificates/1891165.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/18911652.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo cat /etc/ssl/certs/18911652.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/18911652.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo cat /usr/share/ca-certificates/18911652.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.45s)
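
Note: the hash-named files checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: certs synced from the host are installed under /etc/ssl/certs both by name and by <hash>.0 so OpenSSL-based clients can locate them. The hash half of that mapping can be reproduced with:

  openssl x509 -hash -noout -in 1891165.pem   # prints the 8-hex-digit name used for the .0 file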

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-060112 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 ssh "sudo systemctl is-active docker": exit status 1 (367.425387ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 ssh "sudo systemctl is-active crio": exit status 1 (321.241006ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
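
Note: the non-zero exits here are the expected result: with containerd as the active runtime, docker and crio must be inactive, and "systemctl is-active" exits 3 for an inactive unit (surfaced above as "Process exited with status 3"). The positive counterpart would be:

  out/minikube-linux-arm64 -p functional-060112 ssh "sudo systemctl is-active containerd"   # expect "active", exit 0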

                                                
                                    
TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-060112 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-060112 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-060112 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-060112 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1920228: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-060112 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-060112 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0a44e0ce-7e13-4f10-ba8f-c7bf98a55497] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0a44e0ce-7e13-4f10-ba8f-c7bf98a55497] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003486387s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-060112 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.153.255 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
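
Note: the tunnel sequence above works as follows: "minikube tunnel" runs as a daemon and routes traffic to LoadBalancer services, the service then reports an ingress IP (10.101.153.255 here), and the test curls it directly. Roughly, by hand:

  out/minikube-linux-arm64 -p functional-060112 tunnel &   # leave running
  kubectl --context functional-060112 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.101.153.255/   # nginx should answer once the tunnel is up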

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-060112 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-060112 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-060112 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-npt6c" [8ec1fdb0-7294-46f8-9589-2af0336225cd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-npt6c" [8ec1fdb0-7294-46f8-9589-2af0336225cd] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004565144s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.25s)
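
Note: this deployment is the fixture for the ServiceCmd/List, JSONOutput, HTTPS, Format, and URL checks that follow; the setup is just a deployment plus a NodePort service:

  kubectl --context functional-060112 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-060112 expose deployment hello-node --type=NodePort --port=8080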

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "376.694896ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "83.449905ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "446.761284ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "68.864568ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.86s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdany-port1225936650/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705374235978820639" to /tmp/TestFunctionalparallelMountCmdany-port1225936650/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705374235978820639" to /tmp/TestFunctionalparallelMountCmdany-port1225936650/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705374235978820639" to /tmp/TestFunctionalparallelMountCmdany-port1225936650/001/test-1705374235978820639
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (536.595013ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 03:03 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 03:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 03:03 test-1705374235978820639
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh cat /mount-9p/test-1705374235978820639
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-060112 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2983d519-18fe-4c07-8374-e1ee1908f579] Pending
helpers_test.go:344: "busybox-mount" [2983d519-18fe-4c07-8374-e1ee1908f579] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2983d519-18fe-4c07-8374-e1ee1908f579] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2983d519-18fe-4c07-8374-e1ee1908f579] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004062579s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-060112 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdany-port1225936650/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.86s)
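
Note: the first findmnt probe fails simply because the 9p mount is still coming up, and the test retries until it appears. The same wait-then-verify loop by hand (host directory illustrative):

  out/minikube-linux-arm64 mount -p functional-060112 /tmp/mydir:/mount-9p &
  until out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
  out/minikube-linux-arm64 -p functional-060112 ssh -- ls -la /mount-9p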

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.8s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 service list -o json
functional_test.go:1493: Took "799.864751ms" to run "out/minikube-linux-arm64 -p functional-060112 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32558
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32558
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.61s)
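
Note: --url resolves the NodePort endpoint without opening a browser; the HTTPS, Format, and URL checks above all land on the same 192.168.49.2:32558 endpoint. Typical scripted use:

  URL=$(out/minikube-linux-arm64 -p functional-060112 service hello-node --url)
  curl -s "$URL"   # echoserver reflects the request back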

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.38s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdspecific-port3401625051/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (613.257102ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdspecific-port3401625051/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 ssh "sudo umount -f /mount-9p": exit status 1 (402.175798ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-060112 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdspecific-port3401625051/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3275572614/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3275572614/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3275572614/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T" /mount1: (1.158781345s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T" /mount2
2024/01/16 03:04:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-060112 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3275572614/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3275572614/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-060112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3275572614/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.9s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-060112 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-060112
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-060112 image ls --format short --alsologtostderr:
I0116 03:04:26.624306 1925003 out.go:296] Setting OutFile to fd 1 ...
I0116 03:04:26.624487 1925003 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:26.624498 1925003 out.go:309] Setting ErrFile to fd 2...
I0116 03:04:26.624505 1925003 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:26.624796 1925003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
I0116 03:04:26.625571 1925003 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:26.625731 1925003 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:26.626411 1925003 cli_runner.go:164] Run: docker container inspect functional-060112 --format={{.State.Status}}
I0116 03:04:26.646779 1925003 ssh_runner.go:195] Run: systemctl --version
I0116 03:04:26.647089 1925003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060112
I0116 03:04:26.682516 1925003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35038 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/functional-060112/id_rsa Username:docker}
I0116 03:04:26.783120 1925003 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
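
Note: the four ImageList variants differ only in --format; each one runs "sudo crictl images --output json" on the node (visible at the end of every stderr trace) and re-renders the result:

  out/minikube-linux-arm64 -p functional-060112 image ls --format short   # repo:tag per line, as above
  out/minikube-linux-arm64 -p functional-060112 image ls --format table   # bordered table
  out/minikube-linux-arm64 -p functional-060112 image ls --format json
  out/minikube-linux-arm64 -p functional-060112 image ls --format yaml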

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-060112 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| docker.io/library/nginx                     | latest             | sha256:6c7be4 | 67.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/nginx                     | alpine             | sha256:74077e | 17.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| docker.io/library/minikube-local-cache-test | functional-060112  | sha256:7587c9 | 1.01kB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-060112 image ls --format table --alsologtostderr:
I0116 03:04:26.979670 1925063 out.go:296] Setting OutFile to fd 1 ...
I0116 03:04:26.979846 1925063 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:26.979867 1925063 out.go:309] Setting ErrFile to fd 2...
I0116 03:04:26.979885 1925063 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:26.980206 1925063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
I0116 03:04:26.980872 1925063 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:26.981114 1925063 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:26.981698 1925063 cli_runner.go:164] Run: docker container inspect functional-060112 --format={{.State.Status}}
I0116 03:04:27.007570 1925063 ssh_runner.go:195] Run: systemctl --version
I0116 03:04:27.007627 1925063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060112
I0116 03:04:27.029920 1925063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35038 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/functional-060112/id_rsa Username:docker}
I0116 03:04:27.138864 1925063 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-060112 image ls --format json --alsologtostderr:
[{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"r
epoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":
"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"67219073"},{"id":"sha256:7587c9e9552d78bf3d197665f2379a5e157563acec89a68b4938f704e08f7baa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-060112"],"size":"1006"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDiges
ts":["docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17610338"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a
51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-060112 image ls --format json --alsologtostderr:
I0116 03:04:26.947575 1925058 out.go:296] Setting OutFile to fd 1 ...
I0116 03:04:26.947801 1925058 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:26.947809 1925058 out.go:309] Setting ErrFile to fd 2...
I0116 03:04:26.947815 1925058 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:26.948100 1925058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
I0116 03:04:26.948820 1925058 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:26.949013 1925058 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:26.949597 1925058 cli_runner.go:164] Run: docker container inspect functional-060112 --format={{.State.Status}}
I0116 03:04:26.974681 1925058 ssh_runner.go:195] Run: systemctl --version
I0116 03:04:26.974745 1925058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060112
I0116 03:04:27.005698 1925058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35038 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/functional-060112/id_rsa Username:docker}
I0116 03:04:27.116018 1925058 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-060112 image ls --format yaml --alsologtostderr:
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:7587c9e9552d78bf3d197665f2379a5e157563acec89a68b4938f704e08f7baa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-060112
size: "1006"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "67219073"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "17610338"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-060112 image ls --format yaml --alsologtostderr:
I0116 03:04:26.627110 1925004 out.go:296] Setting OutFile to fd 1 ...
I0116 03:04:26.627357 1925004 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:26.627384 1925004 out.go:309] Setting ErrFile to fd 2...
I0116 03:04:26.627403 1925004 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:26.627767 1925004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
I0116 03:04:26.628515 1925004 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:26.628752 1925004 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:26.629386 1925004 cli_runner.go:164] Run: docker container inspect functional-060112 --format={{.State.Status}}
I0116 03:04:26.652289 1925004 ssh_runner.go:195] Run: systemctl --version
I0116 03:04:26.652345 1925004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060112
I0116 03:04:26.700796 1925004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35038 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/functional-060112/id_rsa Username:docker}
I0116 03:04:26.799840 1925004 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-060112 ssh pgrep buildkitd: exit status 1 (306.582341ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image build -t localhost/my-image:functional-060112 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-060112 image build -t localhost/my-image:functional-060112 testdata/build --alsologtostderr: (2.156849219s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-060112 image build -t localhost/my-image:functional-060112 testdata/build --alsologtostderr:
I0116 03:04:27.560728 1925164 out.go:296] Setting OutFile to fd 1 ...
I0116 03:04:27.562579 1925164 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:27.562621 1925164 out.go:309] Setting ErrFile to fd 2...
I0116 03:04:27.562642 1925164 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:04:27.563174 1925164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
I0116 03:04:27.563920 1925164 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:27.564619 1925164 config.go:182] Loaded profile config "functional-060112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 03:04:27.565293 1925164 cli_runner.go:164] Run: docker container inspect functional-060112 --format={{.State.Status}}
I0116 03:04:27.585782 1925164 ssh_runner.go:195] Run: systemctl --version
I0116 03:04:27.585850 1925164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060112
I0116 03:04:27.607820 1925164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35038 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/functional-060112/id_rsa Username:docker}
I0116 03:04:27.703466 1925164 build_images.go:151] Building image from path: /tmp/build.3712498971.tar
I0116 03:04:27.703547 1925164 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 03:04:27.714215 1925164 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3712498971.tar
I0116 03:04:27.718599 1925164 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3712498971.tar: stat -c "%s %y" /var/lib/minikube/build/build.3712498971.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3712498971.tar': No such file or directory
I0116 03:04:27.718630 1925164 ssh_runner.go:362] scp /tmp/build.3712498971.tar --> /var/lib/minikube/build/build.3712498971.tar (3072 bytes)
I0116 03:04:27.749926 1925164 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3712498971
I0116 03:04:27.760793 1925164 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3712498971 -xf /var/lib/minikube/build/build.3712498971.tar
I0116 03:04:27.772027 1925164 containerd.go:379] Building image: /var/lib/minikube/build/build.3712498971
I0116 03:04:27.772110 1925164 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3712498971 --local dockerfile=/var/lib/minikube/build/build.3712498971 --output type=image,name=localhost/my-image:functional-060112
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c76e25d166ab9940df5d83d8d2394f30b853f55d7116d53fcb6cd160f7e42632 0.0s done
#8 exporting config sha256:ed387d9f5ca7a28cd883da04f891aa1a9c1204a412dca25e8ade140c45ed7431
#8 exporting config sha256:ed387d9f5ca7a28cd883da04f891aa1a9c1204a412dca25e8ade140c45ed7431 0.0s done
#8 naming to localhost/my-image:functional-060112 done
#8 DONE 0.1s
I0116 03:04:29.614005 1925164 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3712498971 --local dockerfile=/var/lib/minikube/build/build.3712498971 --output type=image,name=localhost/my-image:functional-060112: (1.841860587s)
I0116 03:04:29.614108 1925164 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3712498971
I0116 03:04:29.626233 1925164 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3712498971.tar
I0116 03:04:29.636661 1925164 build_images.go:207] Built localhost/my-image:functional-060112 from /tmp/build.3712498971.tar
I0116 03:04:29.636692 1925164 build_images.go:123] succeeded building to: functional-060112
I0116 03:04:29.636697 1925164 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)
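Note: buildkit stages #5-#7 above imply a Dockerfile along the following lines. This is a sketch reconstructed from the log, not the literal testdata file; the content.txt payload and the final minikube invocation are assumptions:

	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	echo hello > content.txt
	# builds inside the cluster's containerd via buildctl, as in the log above
	out/minikube-linux-arm64 -p functional-060112 image build -t localhost/my-image:functional-060112 .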

TestFunctional/parallel/ImageCommands/Setup (1.82s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.775978756s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-060112
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.82s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image rm gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-060112
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-060112 image save --daemon gcr.io/google-containers/addon-resizer:functional-060112 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-060112
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/delete_addon-resizer_images (0.09s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-060112
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-060112
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-060112
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (94.61s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-846462 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0116 03:04:42.023666 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-846462 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m34.606322753s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (94.61s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.52s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-846462 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-846462 addons enable ingress --alsologtostderr -v=5: (8.523719387s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.52s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-846462 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

TestJSONOutput/start/Command (83.2s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-577172 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0116 03:07:25.863805 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:08:30.340938 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:30.346250 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:30.356579 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:30.376833 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:30.417094 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:30.497506 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:30.657933 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:30.978487 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:31.619347 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:32.899568 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:08:35.461202 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-577172 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m23.200856103s)
--- PASS: TestJSONOutput/start/Command (83.20s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.83s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-577172 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.77s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-577172 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.77s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.9s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-577172 --output=json --user=testUser
E0116 03:08:40.582092 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-577172 --output=json --user=testUser: (5.903142803s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-749934 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-749934 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.828958ms)

-- stdout --
	{"specversion":"1.0","id":"e84d7cb2-2f4c-47ce-b13f-0ddf22c4c62f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-749934] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbe15629-1d0c-435f-a7a2-1bc95e9ffaec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17967"}}
	{"specversion":"1.0","id":"d9afa720-541c-42d2-8df1-d97810b76f2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad3d2235-bbdf-4146-bec0-b8a54c7122e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig"}}
	{"specversion":"1.0","id":"646199a3-4d61-43d4-9c2b-39dc65815ba7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube"}}
	{"specversion":"1.0","id":"562090b2-e127-4bb3-95b9-89b07066d20f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c8d59ae0-fb31-4134-b706-a4576a9d6cec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dfa737fb-62c4-45e5-9b5c-8c04a18faf39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-749934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-749934
--- PASS: TestErrorJSONOutput (0.26s)
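Note: each --output=json line above is a CloudEvents envelope (specversion, id, source, type, data). A minimal sketch of filtering the stream for error events with jq (jq availability and the profile name are assumptions; the field names match the io.k8s.sigs.minikube.error event shown above):

	out/minikube-linux-arm64 start -p json-demo --output=json 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'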

TestKicCustomNetwork/create_custom_network (46.67s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-084153 --network=
E0116 03:09:11.303333 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-084153 --network=: (44.497778283s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-084153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-084153
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-084153: (2.149481407s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.67s)

TestKicCustomNetwork/use_default_bridge_network (35.06s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-155107 --network=bridge
E0116 03:09:52.263535 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-155107 --network=bridge: (33.042974736s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-155107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-155107
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-155107: (1.990738582s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.06s)

TestKicExistingNetwork (37.68s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-853746 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-853746 --network=existing-network: (35.470084344s)
helpers_test.go:175: Cleaning up "existing-network-853746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-853746
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-853746: (2.040573613s)
--- PASS: TestKicExistingNetwork (37.68s)
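Note: this test starts a cluster on a docker network that was created beforehand rather than letting minikube create one. A condensed sketch of the same flow (the profile name is illustrative):

	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-net-demo --network=existing-network
	docker network ls --format {{.Name}}    # existing-network should be reused, not duplicated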

TestKicCustomSubnet (34.19s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-618221 --subnet=192.168.60.0/24
E0116 03:11:14.183743 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:11:16.629429 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:16.634740 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:16.644970 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:16.665198 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:16.705455 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:16.785742 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:16.946120 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:17.266596 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:17.907640 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:19.187962 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:21.749104 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-618221 --subnet=192.168.60.0/24: (32.065352942s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-618221 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-618221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-618221
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-618221: (2.097637237s)
--- PASS: TestKicCustomSubnet (34.19s)
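Note: a condensed version of the subnet round-trip exercised above (the profile name is illustrative); the inspect output should echo the requested CIDR:

	out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"    # expect 192.168.60.0/24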

TestKicStaticIP (38.42s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-078555 --static-ip=192.168.200.200
E0116 03:11:26.869936 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:37.110101 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:57.590346 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:11:58.179867 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-078555 --static-ip=192.168.200.200: (36.090748132s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-078555 ip
helpers_test.go:175: Cleaning up "static-ip-078555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-078555
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-078555: (2.145099962s)
--- PASS: TestKicStaticIP (38.42s)
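Note: a condensed version of the static-IP flow above (the profile name is illustrative); minikube ip should print the address that was requested:

	out/minikube-linux-arm64 start -p static-ip-demo --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-demo ip    # expect 192.168.200.200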

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (68.33s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-748708 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-748708 --driver=docker  --container-runtime=containerd: (29.442551976s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-751301 --driver=docker  --container-runtime=containerd
E0116 03:12:38.550576 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-751301 --driver=docker  --container-runtime=containerd: (33.200051519s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-748708
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-751301
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-751301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-751301
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-751301: (2.021634018s)
helpers_test.go:175: Cleaning up "first-748708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-748708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-748708: (2.333165317s)
--- PASS: TestMinikubeProfile (68.33s)

TestMountStart/serial/StartWithMountFirst (6.91s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-676914 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-676914 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.908906591s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.91s)

TestMountStart/serial/VerifyMountFirst (0.3s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-676914 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
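Note: the two tests above pair a host-directory mount at start time with an in-guest listing. A condensed sketch (the profile name is illustrative; /minikube-host is the in-guest mount point the test checks):

	out/minikube-linux-arm64 start -p mount-demo --mount --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host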

TestMountStart/serial/StartWithMountSecond (7.67s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-678686 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-678686 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.672591148s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.67s)

TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-678686 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.66s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-676914 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-676914 --alsologtostderr -v=5: (1.656132115s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-678686 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-678686
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-678686: (1.222620738s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.26s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-678686
E0116 03:13:30.340445 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-678686: (7.261518711s)
--- PASS: TestMountStart/serial/RestartStopped (8.26s)

TestMountStart/serial/VerifyMountPostStop (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-678686 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (75.22s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910504 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0116 03:13:58.023905 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:14:00.471654 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910504 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.651769717s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.22s)
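Note: a minimal sketch of the two-node bring-up validated above (the profile name is illustrative); status should report one control plane and one worker:

	out/minikube-linux-arm64 start -p multinode-demo --nodes=2 --memory=2200 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p multinode-demo status --alsologtostderr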

TestMultiNode/serial/DeployApp2Nodes (4.77s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-910504 -- rollout status deployment/busybox: (2.51211476s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-gkdlf -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-v8xd5 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-gkdlf -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-v8xd5 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-gkdlf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-v8xd5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.77s)

TestMultiNode/serial/PingHostFrom2Pods (1.16s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-gkdlf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-gkdlf -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-v8xd5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910504 -- exec busybox-5bc68d56bd-v8xd5 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.16s)
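Note: the pipeline above extracts the resolved address of host.minikube.internal from busybox nslookup output (line 5, third space-separated field) and then pings it from inside the pod. Standalone form (the pod name is illustrative):

	kubectl exec busybox-demo -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"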

TestMultiNode/serial/AddNode (17.43s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-910504 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-910504 -v 3 --alsologtostderr: (16.671868744s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.43s)

TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-910504 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.36s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.45s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp testdata/cp-test.txt multinode-910504:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1802183344/001/cp-test_multinode-910504.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504:/home/docker/cp-test.txt multinode-910504-m02:/home/docker/cp-test_multinode-910504_multinode-910504-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m02 "sudo cat /home/docker/cp-test_multinode-910504_multinode-910504-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504:/home/docker/cp-test.txt multinode-910504-m03:/home/docker/cp-test_multinode-910504_multinode-910504-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m03 "sudo cat /home/docker/cp-test_multinode-910504_multinode-910504-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp testdata/cp-test.txt multinode-910504-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1802183344/001/cp-test_multinode-910504-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504-m02:/home/docker/cp-test.txt multinode-910504:/home/docker/cp-test_multinode-910504-m02_multinode-910504.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504 "sudo cat /home/docker/cp-test_multinode-910504-m02_multinode-910504.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504-m02:/home/docker/cp-test.txt multinode-910504-m03:/home/docker/cp-test_multinode-910504-m02_multinode-910504-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m03 "sudo cat /home/docker/cp-test_multinode-910504-m02_multinode-910504-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp testdata/cp-test.txt multinode-910504-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1802183344/001/cp-test_multinode-910504-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504-m03:/home/docker/cp-test.txt multinode-910504:/home/docker/cp-test_multinode-910504-m03_multinode-910504.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504 "sudo cat /home/docker/cp-test_multinode-910504-m03_multinode-910504.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 cp multinode-910504-m03:/home/docker/cp-test.txt multinode-910504-m02:/home/docker/cp-test_multinode-910504-m03_multinode-910504-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 ssh -n multinode-910504-m02 "sudo cat /home/docker/cp-test_multinode-910504-m03_multinode-910504-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.45s)
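Note: the CopyFile matrix above drives minikube cp through every direction (host to node, node to host, node to node) and verifies each transfer with sudo cat over ssh. The general shapes, with the bracketed placeholders left hypothetical:

	out/minikube-linux-arm64 -p <profile> cp <local-file> <node>:<remote-path>
	out/minikube-linux-arm64 -p <profile> cp <node>:<remote-path> <local-file>
	out/minikube-linux-arm64 -p <profile> cp <src-node>:<path> <dst-node>:<path>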

TestMultiNode/serial/StopNode (2.37s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-910504 node stop m03: (1.249163122s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910504 status: exit status 7 (572.977989ms)

-- stdout --
	multinode-910504
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910504-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910504-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910504 status --alsologtostderr: exit status 7 (547.052882ms)

-- stdout --
	multinode-910504
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910504-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910504-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0116 03:15:32.756873 1972430 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:15:32.757028 1972430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:15:32.757037 1972430 out.go:309] Setting ErrFile to fd 2...
	I0116 03:15:32.757043 1972430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:15:32.757387 1972430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 03:15:32.758155 1972430 out.go:303] Setting JSON to false
	I0116 03:15:32.758217 1972430 mustload.go:65] Loading cluster: multinode-910504
	I0116 03:15:32.759442 1972430 notify.go:220] Checking for updates...
	I0116 03:15:32.759430 1972430 config.go:182] Loaded profile config "multinode-910504": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:15:32.759506 1972430 status.go:255] checking status of multinode-910504 ...
	I0116 03:15:32.760629 1972430 cli_runner.go:164] Run: docker container inspect multinode-910504 --format={{.State.Status}}
	I0116 03:15:32.780509 1972430 status.go:330] multinode-910504 host status = "Running" (err=<nil>)
	I0116 03:15:32.780529 1972430 host.go:66] Checking if "multinode-910504" exists ...
	I0116 03:15:32.780898 1972430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-910504
	I0116 03:15:32.798745 1972430 host.go:66] Checking if "multinode-910504" exists ...
	I0116 03:15:32.799095 1972430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:15:32.799147 1972430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-910504
	I0116 03:15:32.821081 1972430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35103 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/multinode-910504/id_rsa Username:docker}
	I0116 03:15:32.915711 1972430 ssh_runner.go:195] Run: systemctl --version
	I0116 03:15:32.921045 1972430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:15:32.934763 1972430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:15:33.003940 1972430 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:65 SystemTime:2024-01-16 03:15:32.992932185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:15:33.004672 1972430 kubeconfig.go:92] found "multinode-910504" server: "https://192.168.58.2:8443"
	I0116 03:15:33.004697 1972430 api_server.go:166] Checking apiserver status ...
	I0116 03:15:33.004744 1972430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:15:33.018560 1972430 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1331/cgroup
	I0116 03:15:33.030525 1972430 api_server.go:182] apiserver freezer: "6:freezer:/docker/7ced81d6ae6b7899f77487f99a68de765a7b9a28617163421924816f72d4319d/kubepods/burstable/podc6bd5af04cfb23d7079d05d949d538c2/7415249f1d50b37aa6415ccca907194ad74a9852f286edb5d2ab6b50a96c1766"
	I0116 03:15:33.030645 1972430 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7ced81d6ae6b7899f77487f99a68de765a7b9a28617163421924816f72d4319d/kubepods/burstable/podc6bd5af04cfb23d7079d05d949d538c2/7415249f1d50b37aa6415ccca907194ad74a9852f286edb5d2ab6b50a96c1766/freezer.state
	I0116 03:15:33.042847 1972430 api_server.go:204] freezer state: "THAWED"
	I0116 03:15:33.042879 1972430 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0116 03:15:33.052043 1972430 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0116 03:15:33.052118 1972430 status.go:421] multinode-910504 apiserver status = Running (err=<nil>)
	I0116 03:15:33.052143 1972430 status.go:257] multinode-910504 status: &{Name:multinode-910504 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:15:33.052188 1972430 status.go:255] checking status of multinode-910504-m02 ...
	I0116 03:15:33.052536 1972430 cli_runner.go:164] Run: docker container inspect multinode-910504-m02 --format={{.State.Status}}
	I0116 03:15:33.070078 1972430 status.go:330] multinode-910504-m02 host status = "Running" (err=<nil>)
	I0116 03:15:33.070101 1972430 host.go:66] Checking if "multinode-910504-m02" exists ...
	I0116 03:15:33.070398 1972430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-910504-m02
	I0116 03:15:33.088222 1972430 host.go:66] Checking if "multinode-910504-m02" exists ...
	I0116 03:15:33.088532 1972430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:15:33.088582 1972430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-910504-m02
	I0116 03:15:33.106652 1972430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35108 SSHKeyPath:/home/jenkins/minikube-integration/17967-1885793/.minikube/machines/multinode-910504-m02/id_rsa Username:docker}
	I0116 03:15:33.199904 1972430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:15:33.213125 1972430 status.go:257] multinode-910504-m02 status: &{Name:multinode-910504-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:15:33.213160 1972430 status.go:255] checking status of multinode-910504-m03 ...
	I0116 03:15:33.213626 1972430 cli_runner.go:164] Run: docker container inspect multinode-910504-m03 --format={{.State.Status}}
	I0116 03:15:33.231736 1972430 status.go:330] multinode-910504-m03 host status = "Stopped" (err=<nil>)
	I0116 03:15:33.231761 1972430 status.go:343] host is not running, skipping remaining checks
	I0116 03:15:33.231768 1972430 status.go:257] multinode-910504-m03 status: &{Name:multinode-910504-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
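
The status log above doubles as a recipe for how minikube decides the apiserver is healthy: locate the process with pgrep, confirm its freezer cgroup is THAWED, then probe /healthz. A minimal sketch of the same checks by hand, run inside the node, assuming the endpoint from the log; <container-id> and <pod-path> are placeholders for the values shown above:

    # locate the apiserver process (pattern taken from the log above)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # a THAWED freezer state means the pod is not paused
    sudo cat /sys/fs/cgroup/freezer/docker/<container-id>/<pod-path>/freezer.state
    # /healthz is typically readable anonymously under default RBAC; expect "ok"
    curl -k https://192.168.58.2:8443/healthz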

TestMultiNode/serial/StartAfterStop (11.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-910504 node start m03 --alsologtostderr: (11.031483077s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.89s)

TestMultiNode/serial/RestartKeepsNodes (124.6s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910504
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-910504
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-910504: (25.11483359s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910504 --wait=true -v=8 --alsologtostderr
E0116 03:16:16.629828 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:16:44.312826 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:16:58.180519 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910504 --wait=true -v=8 --alsologtostderr: (1m39.312649917s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910504
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.60s)

TestMultiNode/serial/DeleteNode (5.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-910504 node delete m03: (4.402685159s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)
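
The go-template at multinode_test.go:460 reduces `kubectl get nodes` to one True/False per node by walking each node's conditions and keeping only the Ready one. Standalone, with the test harness's extra quoting unwrapped (context name as in the log):

    kubectl --context multinode-910504 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'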

TestMultiNode/serial/StopMultiNode (24.19s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-910504 stop: (23.968667775s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910504 status: exit status 7 (115.346325ms)
-- stdout --
	multinode-910504
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910504-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910504 status --alsologtostderr: exit status 7 (102.979303ms)
-- stdout --
	multinode-910504
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910504-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0116 03:18:19.044575 1981198 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:18:19.044829 1981198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:18:19.044860 1981198 out.go:309] Setting ErrFile to fd 2...
	I0116 03:18:19.044881 1981198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:18:19.045160 1981198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 03:18:19.045370 1981198 out.go:303] Setting JSON to false
	I0116 03:18:19.045526 1981198 mustload.go:65] Loading cluster: multinode-910504
	I0116 03:18:19.045608 1981198 notify.go:220] Checking for updates...
	I0116 03:18:19.046016 1981198 config.go:182] Loaded profile config "multinode-910504": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:18:19.046050 1981198 status.go:255] checking status of multinode-910504 ...
	I0116 03:18:19.046694 1981198 cli_runner.go:164] Run: docker container inspect multinode-910504 --format={{.State.Status}}
	I0116 03:18:19.066386 1981198 status.go:330] multinode-910504 host status = "Stopped" (err=<nil>)
	I0116 03:18:19.066410 1981198 status.go:343] host is not running, skipping remaining checks
	I0116 03:18:19.066418 1981198 status.go:257] multinode-910504 status: &{Name:multinode-910504 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:18:19.066455 1981198 status.go:255] checking status of multinode-910504-m02 ...
	I0116 03:18:19.066769 1981198 cli_runner.go:164] Run: docker container inspect multinode-910504-m02 --format={{.State.Status}}
	I0116 03:18:19.083922 1981198 status.go:330] multinode-910504-m02 host status = "Stopped" (err=<nil>)
	I0116 03:18:19.083942 1981198 status.go:343] host is not running, skipping remaining checks
	I0116 03:18:19.083949 1981198 status.go:257] multinode-910504-m02 status: &{Name:multinode-910504-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.19s)
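
Note that `minikube status` exits 7 once every component reports Stopped, rather than failing with a generic error. The exact code-to-state mapping is not spelled out in this log, so treat the branch below as a sketch based only on the runs above (profile name as in the log):

    out/minikube-linux-arm64 -p multinode-910504 status
    if [ $? -eq 7 ]; then
        echo "host, kubelet and apiserver all stopped"   # matches the output above
    fi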

TestMultiNode/serial/RestartMultiNode (88.13s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910504 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0116 03:18:21.224062 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:18:30.340983 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910504 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m27.318302507s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910504 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.13s)

TestMultiNode/serial/ValidateNameConflict (37.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910504
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910504-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-910504-m02 --driver=docker  --container-runtime=containerd: exit status 14 (105.688518ms)
-- stdout --
	* [multinode-910504-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-910504-m02' is duplicated with machine name 'multinode-910504-m02' in profile 'multinode-910504'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910504-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910504-m03 --driver=docker  --container-runtime=containerd: (35.376623639s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-910504
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-910504: exit status 80 (340.519691ms)
-- stdout --
	* Adding node m03 to cluster multinode-910504
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-910504-m03 already exists in multinode-910504-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-910504-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-910504-m03: (2.072534058s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.97s)

TestPreload (160.95s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-528040 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0116 03:21:16.629425 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-528040 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m14.502002409s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-528040 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-528040 image pull gcr.io/k8s-minikube/busybox: (1.500937156s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-528040
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-528040: (12.036469747s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-528040 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0116 03:21:58.180359 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-528040 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m10.308241552s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-528040 image list
helpers_test.go:175: Cleaning up "test-preload-528040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-528040
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-528040: (2.338900573s)
--- PASS: TestPreload (160.95s)
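
TestPreload's flow reads as a recipe: create a cluster with preloads disabled, pull an extra image, stop, restart with defaults, and confirm the image survived the restart. The same steps by hand (profile name hypothetical; flags as exercised above):

    minikube start -p preload-demo --preload=false --container-runtime=containerd --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --container-runtime=containerd
    minikube -p preload-demo image list    # busybox should still be listed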

TestScheduledStopUnix (106.4s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-932947 --memory=2048 --driver=docker  --container-runtime=containerd
E0116 03:23:30.341011 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-932947 --memory=2048 --driver=docker  --container-runtime=containerd: (30.090932658s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-932947 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-932947 -n scheduled-stop-932947
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-932947 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-932947 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-932947 -n scheduled-stop-932947
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-932947
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-932947 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-932947
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-932947: exit status 7 (94.122966ms)
-- stdout --
	scheduled-stop-932947
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-932947 -n scheduled-stop-932947
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-932947 -n scheduled-stop-932947: exit status 7 (92.275881ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-932947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-932947
E0116 03:24:53.384121 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-932947: (4.49858683s)
--- PASS: TestScheduledStopUnix (106.40s)
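
The scheduled-stop sequence above arms a timer, replaces it, cancels it, and finally lets a short timer fire, polling the countdown through the TimeToStop status field. Condensed (profile name hypothetical):

    minikube stop -p demo --schedule 5m                     # arm a stop five minutes out
    minikube status -p demo --format='{{.TimeToStop}}'      # inspect the countdown
    minikube stop -p demo --cancel-scheduled                # disarm
    minikube stop -p demo --schedule 15s                    # re-arm; this one fires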

TestInsufficientStorage (12.89s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-087564 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-087564 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.298650768s)
-- stdout --
	{"specversion":"1.0","id":"bc12c2e9-8c45-429b-a44c-37bac32681d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-087564] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9cde5d99-fb25-4893-a550-e1cf51d161dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17967"}}
	{"specversion":"1.0","id":"377f541a-36ce-4070-90b2-312d2a67e2d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ce6f46a0-3049-4d85-9af5-6f982fac99c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig"}}
	{"specversion":"1.0","id":"197350b4-8c3c-4e24-93ae-929d4db05493","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube"}}
	{"specversion":"1.0","id":"70e989e8-d16a-4b0a-9164-48db1255240e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"86349eb9-7240-4c8b-b1cf-bfd3620ad0e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c842ff7f-1220-4dc3-b8f5-e0cbd0cab6c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ee0e81b4-d026-453f-b571-bba2208ca15a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bb56da34-19d7-46b4-95ba-e8ff7894a325","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"51278f4e-efcd-4f97-846d-37978117e69e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"17bb09ec-0ee1-4221-83a2-28482a10cf36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-087564 in cluster insufficient-storage-087564","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"909a9265-5a21-4a06-a3f8-536ce4ee8266","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"be6d6417-604e-43cc-a95d-a9c350214dba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ccb9401-a20c-4aa7-bff0-eed937d12f99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-087564 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-087564 --output=json --layout=cluster: exit status 7 (330.1114ms)
-- stdout --
	{"Name":"insufficient-storage-087564","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-087564","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0116 03:25:07.253334 1998546 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-087564" does not appear in /home/jenkins/minikube-integration/17967-1885793/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-087564 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-087564 --output=json --layout=cluster: exit status 7 (313.137822ms)
-- stdout --
	{"Name":"insufficient-storage-087564","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-087564","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0116 03:25:07.568140 1998598 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-087564" does not appear in /home/jenkins/minikube-integration/17967-1885793/kubeconfig
	E0116 03:25:07.580149 1998598 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/insufficient-storage-087564/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-087564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-087564
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-087564: (1.949692891s)
--- PASS: TestInsufficientStorage (12.89s)
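
With --output=json, start streams one CloudEvents record per line, so the RSRC_DOCKER_STORAGE failure above can be consumed programmatically. A sketch that filters out just the error events (assumes jq is installed; profile name hypothetical):

    minikube start -p demo --output=json 2>&1 \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'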

TestRunningBinaryUpgrade (81.99s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2396232927 start -p running-upgrade-534510 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2396232927 start -p running-upgrade-534510 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.628538969s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-534510 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-534510 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.299317998s)
helpers_test.go:175: Cleaning up "running-upgrade-534510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-534510
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-534510: (2.890728941s)
--- PASS: TestRunningBinaryUpgrade (81.99s)

TestKubernetesUpgrade (377.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-500919 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-500919 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.846130555s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-500919
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-500919: (2.61932221s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-500919 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-500919 status --format={{.Host}}: exit status 7 (148.655993ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-500919 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0116 03:27:39.673325 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-500919 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m40.835062646s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-500919 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-500919 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-500919 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (114.536686ms)
-- stdout --
	* [kubernetes-upgrade-500919] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-500919
	    minikube start -p kubernetes-upgrade-500919 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5009192 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-500919 --kubernetes-version=v1.29.0-rc.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-500919 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-500919 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.696921343s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-500919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-500919
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-500919: (2.518409728s)
--- PASS: TestKubernetesUpgrade (377.93s)
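
The passing path through this test is the supported one: upgrades go stop-then-start with a newer --kubernetes-version, while the in-place downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106) and the delete/recreate options quoted in the stderr block. Condensed (profile name hypothetical; versions as exercised above):

    minikube start -p upgrade-demo --kubernetes-version=v1.16.0
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --kubernetes-version=v1.29.0-rc.2   # upgrade succeeds
    minikube start -p upgrade-demo --kubernetes-version=v1.16.0        # downgrade refused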

TestMissingContainerUpgrade (171.98s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1069651340 start -p missing-upgrade-564892 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1069651340 start -p missing-upgrade-564892 --memory=2200 --driver=docker  --container-runtime=containerd: (1m28.640800486s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-564892
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-564892: (13.194757653s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-564892
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-564892 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0116 03:26:58.180685 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-564892 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.408724769s)
helpers_test.go:175: Cleaning up "missing-upgrade-564892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-564892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-564892: (2.459547151s)
--- PASS: TestMissingContainerUpgrade (171.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-963609 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-963609 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (90.471772ms)
-- stdout --
	* [NoKubernetes-963609] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (43.12s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-963609 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-963609 --driver=docker  --container-runtime=containerd: (42.526243013s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-963609 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.12s)

TestNoKubernetes/serial/StartWithStopK8s (17.08s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-963609 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-963609 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.796712109s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-963609 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-963609 status -o json: exit status 2 (333.33242ms)
-- stdout --
	{"Name":"NoKubernetes-963609","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-963609
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-963609: (1.945102625s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.08s)

TestNoKubernetes/serial/Start (7.36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-963609 --no-kubernetes --driver=docker  --container-runtime=containerd
E0116 03:26:16.629881 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-963609 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.363120777s)
--- PASS: TestNoKubernetes/serial/Start (7.36s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-963609 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-963609 "sudo systemctl is-active --quiet service kubelet": exit status 1 (389.363176ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
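
The kubelet check piggybacks on systemctl's exit status through minikube ssh: `systemctl is-active --quiet` exits 0 for an active unit and non-zero otherwise (the status 3 surfacing in the ssh error above). As a standalone probe (profile name hypothetical):

    minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet active" \
      || echo "kubelet not running"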

TestNoKubernetes/serial/ProfileList (1.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-963609
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-963609: (1.341104298s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (7.98s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-963609 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-963609 --driver=docker  --container-runtime=containerd: (7.983086113s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.98s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.5s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-963609 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-963609 "sudo systemctl is-active --quiet service kubelet": exit status 1 (501.101212ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.50s)

TestStoppedBinaryUpgrade/Setup (1.13s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.13s)

TestStoppedBinaryUpgrade/Upgrade (104.19s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3048862529 start -p stopped-upgrade-383400 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0116 03:28:30.340574 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3048862529 start -p stopped-upgrade-383400 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.036524883s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3048862529 -p stopped-upgrade-383400 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3048862529 -p stopped-upgrade-383400 stop: (19.951501641s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-383400 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-383400 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.202757866s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.19s)
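
TestStoppedBinaryUpgrade reduces to: provision with the previous release, stop the cluster, then point the new binary at the same profile. A sketch under those assumptions (binary path and profile name hypothetical; the test uses a downloaded v1.26.0 binary, which still takes the older --vm-driver spelling):

    /path/to/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=containerd
    /path/to/minikube-v1.26.0 -p upgrade-demo stop
    out/minikube-linux-arm64 start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=containerd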

TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-383400
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-383400: (1.299277896s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

TestPause/serial/Start (83.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-533278 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0116 03:31:16.628994 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:31:58.179869 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-533278 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.79482698s)
--- PASS: TestPause/serial/Start (83.79s)

TestPause/serial/SecondStartNoReconfiguration (8.31s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-533278 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-533278 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.274615625s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.31s)

TestPause/serial/Pause (1.07s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-533278 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-533278 --alsologtostderr -v=5: (1.068381143s)
--- PASS: TestPause/serial/Pause (1.07s)

TestPause/serial/VerifyStatus (0.47s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-533278 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-533278 --output=json --layout=cluster: exit status 2 (472.214566ms)
-- stdout --
	{"Name":"pause-533278","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-533278","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)
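
status --output=json --layout=cluster reports HTTP-style codes per component: 200 OK, 405 Stopped and 418 Paused here, 507 InsufficientStorage in the earlier test. A sketch that extracts the per-node component states (assumes jq; profile name hypothetical):

    minikube status -p demo --output=json --layout=cluster \
      | jq '.Nodes[] | {node: .Name, components: .Components}'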

TestPause/serial/Unpause (1s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-533278 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (1.00s)

TestPause/serial/PauseAgain (1.21s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-533278 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-533278 --alsologtostderr -v=5: (1.207987345s)
--- PASS: TestPause/serial/PauseAgain (1.21s)

TestPause/serial/DeletePaused (3.55s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-533278 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-533278 --alsologtostderr -v=5: (3.548197193s)
--- PASS: TestPause/serial/DeletePaused (3.55s)

TestPause/serial/VerifyDeletedResources (0.21s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-533278
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-533278: exit status 1 (20.891718ms)
-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-533278: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)
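
The check above treats a non-zero exit from "docker volume inspect" plus the daemon's "no such volume" message as proof that the delete cleaned up the profile's volume. A sketch of the same probe with os/exec (the volumeGone helper is hypothetical, not part of the test suite, and requires a local docker on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone reports whether `docker volume inspect` fails with the
// daemon's "no such volume" error, mirroring the check in the log above.
func volumeGone(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	return err != nil && strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println(volumeGone("pause-533278")) // true once the profile is deleted
}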

TestNetworkPlugins/group/false (5.41s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-436925 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-436925 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (290.024807ms)

-- stdout --
	* [false-436925] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0116 03:32:58.169054 2035805 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:32:58.169172 2035805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:32:58.169181 2035805 out.go:309] Setting ErrFile to fd 2...
	I0116 03:32:58.169188 2035805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:32:58.169723 2035805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-1885793/.minikube/bin
	I0116 03:32:58.170181 2035805 out.go:303] Setting JSON to false
	I0116 03:32:58.171129 2035805 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":36915,"bootTime":1705339064,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0116 03:32:58.171199 2035805 start.go:138] virtualization:  
	I0116 03:32:58.174461 2035805 out.go:177] * [false-436925] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:32:58.177059 2035805 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:32:58.177230 2035805 notify.go:220] Checking for updates...
	I0116 03:32:58.181767 2035805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:32:58.184439 2035805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-1885793/kubeconfig
	I0116 03:32:58.186768 2035805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-1885793/.minikube
	I0116 03:32:58.190196 2035805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:32:58.192444 2035805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:32:58.195067 2035805 config.go:182] Loaded profile config "force-systemd-flag-470141": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 03:32:58.195240 2035805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:32:58.219837 2035805 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:32:58.219964 2035805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:32:58.338963 2035805 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-16 03:32:58.323581422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:32:58.339067 2035805 docker.go:295] overlay module found
	I0116 03:32:58.341412 2035805 out.go:177] * Using the docker driver based on user configuration
	I0116 03:32:58.343846 2035805 start.go:298] selected driver: docker
	I0116 03:32:58.343867 2035805 start.go:902] validating driver "docker" against <nil>
	I0116 03:32:58.343881 2035805 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:32:58.346652 2035805 out.go:177] 
	W0116 03:32:58.348846 2035805 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0116 03:32:58.351225 2035805 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-436925 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-436925

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-436925

>>> host: /etc/nsswitch.conf:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /etc/hosts:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /etc/resolv.conf:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-436925

>>> host: crictl pods:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: crictl containers:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> k8s: describe netcat deployment:
error: context "false-436925" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-436925" does not exist

>>> k8s: netcat logs:
error: context "false-436925" does not exist

>>> k8s: describe coredns deployment:
error: context "false-436925" does not exist

>>> k8s: describe coredns pods:
error: context "false-436925" does not exist

>>> k8s: coredns logs:
error: context "false-436925" does not exist

>>> k8s: describe api server pod(s):
error: context "false-436925" does not exist

>>> k8s: api server logs:
error: context "false-436925" does not exist

>>> host: /etc/cni:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: ip a s:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: ip r s:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: iptables-save:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: iptables table nat:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> k8s: describe kube-proxy daemon set:
error: context "false-436925" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-436925" does not exist

>>> k8s: kube-proxy logs:
error: context "false-436925" does not exist

>>> host: kubelet daemon status:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: kubelet daemon config:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> k8s: kubelet logs:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-436925

>>> host: docker daemon status:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: docker daemon config:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /etc/docker/daemon.json:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: docker system info:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: cri-docker daemon status:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: cri-docker daemon config:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: cri-dockerd version:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: containerd daemon status:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: containerd daemon config:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /etc/containerd/config.toml:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: containerd config dump:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: crio daemon status:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: crio daemon config:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: /etc/crio:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

>>> host: crio config:
* Profile "false-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436925"

----------------------- debugLogs end: false-436925 [took: 4.918081666s] --------------------------------
helpers_test.go:175: Cleaning up "false-436925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-436925
--- PASS: TestNetworkPlugins/group/false (5.41s)
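
This group passes because minikube rejects the combination up front: with --container-runtime=containerd, --cni=false fails validation and exits with status 14 (MK_USAGE) before any cluster exists, which is why every debugLogs probe above reports a missing profile or context. A sketch that asserts the same fast failure (assumes a minikube binary on PATH rather than out/minikube-linux-arm64):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Expected to fail fast: the containerd runtime requires a CNI.
	err := exec.Command("minikube", "start", "-p", "false-436925",
		"--cni=false", "--driver=docker", "--container-runtime=containerd").Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode()) // the run above shows 14 (MK_USAGE)
	}
}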

TestStartStop/group/old-k8s-version/serial/FirstStart (114.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-507272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0116 03:35:01.225075 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:36:16.629670 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-507272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m54.701547327s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.70s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-507272 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8701953e-5117-4c3d-b9f5-d4e3a2f6179c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8701953e-5117-4c3d-b9f5-d4e3a2f6179c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003494068s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-507272 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.54s)
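
The deploy step waits for the busybox pod to report Running, then execs "ulimit -n" inside it to confirm the container inherits a sane open-file limit. The same probe can be driven from Go (context name taken from the run above; any kubectl on PATH will do):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the test: read the open-file limit inside the busybox pod.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-507272",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}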

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-507272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-507272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04541409s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-507272 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-507272 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-507272 --alsologtostderr -v=3: (12.107214225s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-507272 -n old-k8s-version-507272
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-507272 -n old-k8s-version-507272: exit status 7 (100.605632ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-507272 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
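
minikube status exits non-zero whenever the cluster is not fully running, so after a stop the test expects exit code 7 together with the literal "Stopped" from the {{.Host}} template; the "(may be ok)" note above records that tolerance. A sketch of the same tolerant check (profile name from the run above, minikube assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-507272").Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode() // 7 for a stopped host, as in the run above
	}
	fmt.Printf("host=%q exit=%d\n", string(out), code)
}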

TestStartStop/group/old-k8s-version/serial/SecondStart (661.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-507272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0116 03:36:58.180592 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-507272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m1.28213181s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-507272 -n old-k8s-version-507272
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (661.71s)

TestStartStop/group/no-preload/serial/FirstStart (69.8s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-084008 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-084008 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m9.800179077s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.80s)

TestStartStop/group/no-preload/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-084008 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b97dd872-5c24-4bcf-b35a-fc62d4deaff6] Pending
helpers_test.go:344: "busybox" [b97dd872-5c24-4bcf-b35a-fc62d4deaff6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b97dd872-5c24-4bcf-b35a-fc62d4deaff6] Running
E0116 03:38:30.340372 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003961276s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-084008 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-084008 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-084008 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069471967s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-084008 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (12.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-084008 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-084008 --alsologtostderr -v=3: (12.184801341s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.18s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-084008 -n no-preload-084008
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-084008 -n no-preload-084008: exit status 7 (90.090189ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-084008 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (340.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-084008 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 03:41:16.629920 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:41:33.384329 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:41:58.180761 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:43:30.341097 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:44:19.673562 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-084008 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m40.322065803s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-084008 -n no-preload-084008
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (340.80s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xf7db" [bb07b1de-0d6b-43ef-a4a1-d540b7dfc6e5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xf7db" [bb07b1de-0d6b-43ef-a4a1-d540b7dfc6e5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004027374s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xf7db" [bb07b1de-0d6b-43ef-a4a1-d540b7dfc6e5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003985105s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-084008 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-084008 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
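
The image check lists everything loaded in the cluster and flags images outside minikube's stock set, such as the kindnet and busybox entries above. A loose approximation in Go; note the JSON shape (a flat array with a repoTags list per image) is an assumption about this release's image list --format=json output, and the single-registry allow-list here is illustrative, not the test's real expected-image list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image is the assumed shape of one `minikube image list --format=json` entry.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "no-preload-084008",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("non-minikube image:", tag)
			}
		}
	}
}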

TestStartStop/group/no-preload/serial/Pause (3.43s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-084008 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-084008 -n no-preload-084008
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-084008 -n no-preload-084008: exit status 2 (358.696488ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-084008 -n no-preload-084008
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-084008 -n no-preload-084008: exit status 2 (358.933232ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-084008 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-084008 -n no-preload-084008
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-084008 -n no-preload-084008
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.43s)
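
Pausing freezes the apiserver container but stops the kubelet, so the two Go templates above intentionally disagree: {{.APIServer}} prints "Paused" and {{.Kubelet}} prints "Stopped", each with exit status 2. A small sketch that reads both components the same way (the componentStatus helper is hypothetical; minikube assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
)

// componentStatus renders one status field via a Go template and returns
// the text plus the exit code (2 while any component is not running).
func componentStatus(profile, tmpl string) (string, int) {
	out, err := exec.Command("minikube", "status", "-p", profile, "--format", tmpl).Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	for _, tmpl := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		s, code := componentStatus("no-preload-084008", tmpl)
		fmt.Printf("%s -> %q (exit %d)\n", tmpl, s, code)
	}
}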

TestStartStop/group/embed-certs/serial/FirstStart (95.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-924445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0116 03:46:16.629854 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-924445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m35.924433102s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.92s)

TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-924445 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c02f1e52-e043-484c-8faa-3321b101b6bd] Pending
helpers_test.go:344: "busybox" [c02f1e52-e043-484c-8faa-3321b101b6bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c02f1e52-e043-484c-8faa-3321b101b6bd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00446198s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-924445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-924445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-924445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.082447511s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-924445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/embed-certs/serial/Stop (12.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-924445 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-924445 --alsologtostderr -v=3: (12.162866403s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.16s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-924445 -n embed-certs-924445
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-924445 -n embed-certs-924445: exit status 7 (94.487627ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-924445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (337.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-924445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0116 03:46:58.180412 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-924445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m37.029784799s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-924445 -n embed-certs-924445
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.54s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-4nm75" [5e9abc4f-4aff-4b5d-a450-854550fe53ad] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004011882s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-4nm75" [5e9abc4f-4aff-4b5d-a450-854550fe53ad] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004210866s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-507272 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-507272 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-507272 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-507272 -n old-k8s-version-507272
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-507272 -n old-k8s-version-507272: exit status 2 (374.282086ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-507272 -n old-k8s-version-507272
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-507272 -n old-k8s-version-507272: exit status 2 (395.456737ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-507272 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-507272 -n old-k8s-version-507272
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-507272 -n old-k8s-version-507272
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.55s)
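The Pause subtest above is worth unpacking: minikube pause suspends the Kubernetes control-plane containers and stops the kubelet, so the two status probes are expected to exit non-zero while printing Paused/Stopped, and the "(may be ok)" notes mark those exits as anticipated. A hand-run sketch of the same sequence, with commands and profile name taken verbatim from this run:

    out/minikube-linux-arm64 pause -p old-k8s-version-507272 --alsologtostderr -v=1
    # While paused: apiserver reports Paused, kubelet reports Stopped; both exit 2.
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-507272 -n old-k8s-version-507272
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-507272 -n old-k8s-version-507272
    out/minikube-linux-arm64 unpause -p old-k8s-version-507272 --alsologtostderr -v=1
    # After unpausing, the same two status calls should succeed (exit 0).
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-507272 -n old-k8s-version-507272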

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-004888 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0116 03:48:23.962948 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:23.968244 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:23.978445 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:23.998720 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:24.039191 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:24.119954 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:24.280409 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:24.601292 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:25.242199 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:26.522475 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:29.082676 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:30.340421 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
E0116 03:48:34.203449 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:48:44.443665 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:49:04.924156 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-004888 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m19.02744119s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-004888 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2e587563-083d-48f3-be4f-e7a34319a192] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2e587563-083d-48f3-be4f-e7a34319a192] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004088947s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-004888 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)
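The DeployApp subtest above boils down to creating the busybox pod from the repo's testdata and, once it is Running, reading the container's open-file-descriptor limit. A hand-run equivalent (the kubectl wait line is an assumption standing in for the test's own 8m poll loop):

    kubectl --context default-k8s-diff-port-004888 create -f testdata/busybox.yaml
    # Block until the pod is Ready; the test polls for up to 8 minutes instead.
    kubectl --context default-k8s-diff-port-004888 wait --for=condition=ready pod busybox --timeout=8m
    # Read the container's open-file limit (ulimit -n) inside the pod.
    kubectl --context default-k8s-diff-port-004888 exec busybox -- /bin/sh -c "ulimit -n"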

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-004888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-004888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.177778644s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-004888 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-004888 --alsologtostderr -v=3
E0116 03:49:45.884493 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-004888 --alsologtostderr -v=3: (12.139861057s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888: exit status 7 (92.526236ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-004888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-004888 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0116 03:51:07.804642 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:51:16.629665 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
E0116 03:51:31.916354 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:31.921607 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:31.931922 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:31.952142 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:31.992380 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:32.073361 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:32.233770 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:32.554712 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:33.195094 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:34.475267 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:37.035470 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:41.225309 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:51:42.155957 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:52.397009 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:51:58.180594 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:52:12.877487 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-004888 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m43.820836883s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mrtqr" [35269ada-5724-4762-9771-046122cfec53] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mrtqr" [35269ada-5724-4762-9771-046122cfec53] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.003721554s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mrtqr" [35269ada-5724-4762-9771-046122cfec53] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00424022s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-924445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-924445 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-924445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-924445 -n embed-certs-924445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-924445 -n embed-certs-924445: exit status 2 (379.021528ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-924445 -n embed-certs-924445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-924445 -n embed-certs-924445: exit status 2 (371.851995ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-924445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-924445 -n embed-certs-924445
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-924445 -n embed-certs-924445
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-716494 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 03:52:53.838308 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:53:23.962637 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
E0116 03:53:30.340421 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-716494 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (48.685823282s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-716494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-716494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.091496936s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-716494 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-716494 --alsologtostderr -v=3: (1.286374198s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-716494 -n newest-cni-716494
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-716494 -n newest-cni-716494: exit status 7 (85.455206ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-716494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.4s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-716494 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 03:53:51.645178 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-716494 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (31.02651029s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-716494 -n newest-cni-716494
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.40s)
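For anyone decoding the newest-cni start invocation, here is a commented restatement of the exact command from this run (the flag summaries paraphrase minikube's own help text, so treat them as approximations rather than authoritative docs):

    # --wait=apiserver,system_pods,default_sa : gate readiness on just these components
    # --feature-gates ServerSideApply=true    : Kubernetes feature gate passed to the components
    # --network-plugin=cni                    : defer pod networking to a CNI plugin
    # --extra-config=kubeadm.pod-network-cidr : pod CIDR handed through to kubeadm
    out/minikube-linux-arm64 start -p newest-cni-716494 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2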

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-716494 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-716494 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-716494 -n newest-cni-716494
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-716494 -n newest-cni-716494: exit status 2 (364.030472ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-716494 -n newest-cni-716494
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-716494 -n newest-cni-716494: exit status 2 (386.20462ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-716494 --alsologtostderr -v=1
E0116 03:54:15.758841 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-716494 -n newest-cni-716494
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-716494 -n newest-cni-716494
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.42s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.57s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m26.566422124s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j9hnm" [4d223d07-38df-40d1-b6f8-ba903b7138d7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j9hnm" [4d223d07-38df-40d1-b6f8-ba903b7138d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004425498s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-436925 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-436925 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n4mgk" [5beb70e6-c13f-4fab-ac04-7cf0331c68ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n4mgk" [5beb70e6-c13f-4fab-ac04-7cf0331c68ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004017488s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j9hnm" [4d223d07-38df-40d1-b6f8-ba903b7138d7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004197592s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-004888 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-436925 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
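Taken together, the auto-group DNS, Localhost, and HairPin probes above exercise three distinct paths from inside the netcat pod, and the nc flags carry the semantics: -z probes without sending data, -w 5 sets a five-second timeout, and -i 5 spaces probes five seconds apart. A sketch of the trio against the same deployment:

    # DNS: resolve an in-cluster service name through the plugin's dataplane.
    kubectl --context auto-436925 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod reaches its own listener on 8080 via loopback.
    kubectl --context auto-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod reaches itself back through its own Service name ("netcat").
    kubectl --context auto-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"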

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-004888 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-004888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888: exit status 2 (370.680582ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888: exit status 2 (379.127363ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-004888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-004888 -n default-k8s-diff-port-004888
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)
E0116 04:01:58.180658 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0116 03:56:16.629580 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m33.995784417s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.00s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (80.55s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0116 03:56:31.916746 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
E0116 03:56:58.180535 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/addons-843965/client.crt: no such file or directory
E0116 03:56:59.599928 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/old-k8s-version-507272/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m20.545701265s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-j54lr" [f54b6449-a444-4e45-8d28-57b688da7caf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005464357s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4hh9m" [0b417481-15b4-412e-ae91-5a9533fd4e84] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005620334s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
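The two ControllerPod checks above only assert that each CNI's node agent is Running in kube-system; a rough hand equivalent is a label query (label selectors taken from the waits above, kubectl contexts from this run):

    # kindnet's DaemonSet pods carry app=kindnet.
    kubectl --context kindnet-436925 get pods -n kube-system -l app=kindnet
    # calico's node agent carries k8s-app=calico-node.
    kubectl --context calico-436925 get pods -n kube-system -l k8s-app=calico-node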

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-436925 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-436925 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xmdhr" [bfbfe128-1677-4d11-9fcf-8c5df9bf2c7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xmdhr" [bfbfe128-1677-4d11-9fcf-8c5df9bf2c7c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004639137s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.27s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-436925 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-436925 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bl5vn" [0693d425-25b8-4ca1-9c67-e7213669af5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bl5vn" [0693d425-25b8-4ca1-9c67-e7213669af5f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004199137s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-436925 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-436925 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.9s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0116 03:58:23.962614 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/no-preload-084008/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.895881711s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.90s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (88.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0116 03:58:30.340488 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/functional-060112/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m28.673760019s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.67s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-436925 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-436925 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7z6rr" [fd6b2eb5-3755-4fef-946f-6ab77f7ad3a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7z6rr" [fd6b2eb5-3755-4fef-946f-6ab77f7ad3a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004298627s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-436925 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.75s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0116 03:59:54.027514 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/default-k8s-diff-port-004888/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.747652397s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.75s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-436925 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-436925 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v6mkq" [8269fc40-9bad-4e90-8099-b266c9535d2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v6mkq" [8269fc40-9bad-4e90-8099-b266c9535d2c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004476626s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-436925 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)
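
Note: the HairPin step exercises hairpin traffic: the netcat pod dials its own Service name, so the connection leaves via the service VIP and is NATed back to the same pod. From inside the pod the probe reduces to a bounded TCP dial; a sketch mirroring nc -w 5 -z netcat 8080 (illustrative only):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// From inside the netcat pod, "netcat" resolves to the pod's own
    	// Service, so a successful dial proves hairpin NAT works.
    	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	fmt.Println("hairpin OK via", conn.RemoteAddr())
    }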

TestNetworkPlugins/group/bridge/Start (87.66s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0116 04:00:47.001779 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:47.007231 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:47.017549 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:47.037818 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:47.078781 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:47.159755 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:47.320662 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:47.641086 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:48.281455 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:49.561714 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:52.122629 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:55.467880 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/default-k8s-diff-port-004888/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-436925 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m27.655969232s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.66s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f5gd9" [4a7e00c6-177b-4301-bd68-8f01c2b0fd63] Running
E0116 04:00:57.242946 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
E0116 04:00:59.674673 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/ingress-addon-legacy-846462/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004361585s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-436925 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-436925 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6nwn6" [1cb93913-5387-4f10-9d16-28dabfb2842d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6nwn6" [1cb93913-5387-4f10-9d16-28dabfb2842d] Running
E0116 04:01:07.484100 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004173858s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-436925 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-436925 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-436925 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fncbv" [3a5167a1-0b8d-48ec-98ad-49e33def7b61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fncbv" [3a5167a1-0b8d-48ec-98ad-49e33def7b61] Running
E0116 04:02:08.926121 1891165 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-1885793/.minikube/profiles/auto-436925/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.0033554s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-436925 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-436925 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (31/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-734822 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-734822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-734822
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
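
Note: runtime-gated skips like this one branch on the configured container runtime before doing any work. A sketch of that pattern with the standard testing package; the CONTAINER_RUNTIME variable and test name are illustrative assumptions, not minikube's actual wiring:

    package example

    import (
    	"os"
    	"testing"
    )

    func TestDockerOnlyFeature(t *testing.T) {
    	// Skip unless this run targets the docker container runtime.
    	if rt := os.Getenv("CONTAINER_RUNTIME"); rt != "docker" {
    		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
    	}
    	// docker-specific assertions would follow here.
    }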

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-562262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-562262
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (5.77s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-436925 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-436925

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-436925

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /etc/hosts:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /etc/resolv.conf:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-436925

>>> host: crictl pods:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: crictl containers:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> k8s: describe netcat deployment:
error: context "kubenet-436925" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-436925" does not exist

>>> k8s: netcat logs:
error: context "kubenet-436925" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-436925" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-436925" does not exist

>>> k8s: coredns logs:
error: context "kubenet-436925" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-436925" does not exist

>>> k8s: api server logs:
error: context "kubenet-436925" does not exist

>>> host: /etc/cni:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: ip a s:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: ip r s:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: iptables-save:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: iptables table nat:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-436925" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-436925" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-436925" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: kubelet daemon config:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> k8s: kubelet logs:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-436925

>>> host: docker daemon status:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: docker daemon config:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: docker system info:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: cri-docker daemon status:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: cri-docker daemon config:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: cri-dockerd version:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: containerd daemon status:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: containerd daemon config:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: containerd config dump:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: crio daemon status:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: crio daemon config:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: /etc/crio:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

>>> host: crio config:
* Profile "kubenet-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436925"

----------------------- debugLogs end: kubenet-436925 [took: 5.548669011s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-436925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-436925
--- SKIP: TestNetworkPlugins/group/kubenet (5.77s)

TestNetworkPlugins/group/cilium (5.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-436925 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-436925" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-436925

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-436925

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-436925" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-436925" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-436925" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-436925" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-436925" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: kubelet daemon config:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> k8s: kubelet logs:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-436925

>>> host: docker daemon status:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: docker daemon config:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: docker system info:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: cri-docker daemon status:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: cri-docker daemon config:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: cri-dockerd version:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: containerd daemon status:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: containerd daemon config:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: containerd config dump:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: crio daemon status:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: crio daemon config:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: /etc/crio:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

>>> host: crio config:
* Profile "cilium-436925" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436925"

----------------------- debugLogs end: cilium-436925 [took: 5.225563818s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-436925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-436925
--- SKIP: TestNetworkPlugins/group/cilium (5.43s)