Test Report: Docker_Linux_containerd_arm64 18166

6ca5695ca596b9e8847d9a56309d03e2dd51a205:2024-02-14:33132

Test fail (10/320)

TestAddons/parallel/Ingress (35.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-107916 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-107916 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-107916 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d5b11803-9a63-486d-af2f-4921a92290c2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d5b11803-9a63-486d-af2f-4921a92290c2] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.005690213s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-107916 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.070721509s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-107916 addons disable ingress --alsologtostderr -v=1: (7.89103558s)
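
The failure above reduces to a single step: nslookup against the ingress-dns responder at 192.168.49.2 timed out after 15s, meaning nothing answered on 53/udp at the node IP. A minimal standalone sketch of that probe, for reproducing the timeout outside the test harness, follows; the 3-attempt retry loop and 5s backoff are assumptions for illustration, not the logic in addons_test.go, while the host name and server IP are copied from the log.

// probe.go: re-run the failing DNS lookup the same way the test shells out to nslookup.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const host = "hello-john.test" // name served by the ingress-dns example manifest
	const server = "192.168.49.2"  // minikube node IP, as reported by `minikube ip` above

	// A freshly started DNS pod can time out transiently, so retry before
	// concluding the addon is broken (retry policy is illustrative only).
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := exec.Command("nslookup", host, server).CombinedOutput()
		if err == nil {
			fmt.Printf("resolved on attempt %d:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed (%v):\n%s", attempt, err, out)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("no answer from ingress-dns; check that 53/udp on the node IP is reachable")
}
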
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-107916
helpers_test.go:235: (dbg) docker inspect addons-107916:

-- stdout --
	[
	    {
	        "Id": "b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2",
	        "Created": "2024-02-14T02:55:16.420595551Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1136363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T02:55:16.696144744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2/hosts",
	        "LogPath": "/var/lib/docker/containers/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2-json.log",
	        "Name": "/addons-107916",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-107916:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-107916",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ad15bcfab78f1b42a106505f16e91b05f3ec6d12b5d6ee964cebb0825f950870-init/diff:/var/lib/docker/overlay2/2b57dacbb0185892ad2774651ca7e304a0e7ce49c55385fdb5828fd98438b35e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad15bcfab78f1b42a106505f16e91b05f3ec6d12b5d6ee964cebb0825f950870/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad15bcfab78f1b42a106505f16e91b05f3ec6d12b5d6ee964cebb0825f950870/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad15bcfab78f1b42a106505f16e91b05f3ec6d12b5d6ee964cebb0825f950870/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-107916",
	                "Source": "/var/lib/docker/volumes/addons-107916/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-107916",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-107916",
	                "name.minikube.sigs.k8s.io": "addons-107916",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24617fdc7846c69b15f7b14765c3111c326c83c35829afe4fa68f0759e916cae",
	            "SandboxKey": "/var/run/docker/netns/24617fdc7846",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34032"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34031"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34028"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34030"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34029"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-107916": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b44787e49875",
	                        "addons-107916"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "b93ddc45641f03c8b48df5c33691deb87ba7dfc5305e220447487c34fae09735",
	                    "EndpointID": "667b147d1515ed1fa71c2f6a12447183751d391b179724dba1c54edc004d361c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-107916",
	                        "b44787e49875"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
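
The NetworkSettings.Ports block in the inspect output above is what later steps consume: minikube resolves the SSH endpoint with the template query docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" seen further down in this log. As a sketch, the same lookup can be done against the raw JSON; the struct fields mirror the inspect output above, the container name is this report's profile, and error handling on the port slice is trimmed for brevity.

// ports.go: read the host port mapped to the container's 22/tcp from `docker inspect`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	raw, err := exec.Command("docker", "inspect", "addons-107916").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	// Per the output above, this maps 22/tcp -> 127.0.0.1:34032.
	ssh := containers[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh endpoint: %s:%s\n", ssh.HostIp, ssh.HostPort)
}
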
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-107916 -n addons-107916
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-107916 logs -n 25: (1.537401656s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-630494                                                                     | download-only-630494   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| delete  | -p download-only-950365                                                                     | download-only-950365   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| delete  | -p download-only-695284                                                                     | download-only-695284   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| start   | --download-only -p                                                                          | download-docker-935155 | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | download-docker-935155                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-935155                                                                   | download-docker-935155 | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-348755   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | binary-mirror-348755                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39189                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-348755                                                                     | binary-mirror-348755   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | addons-107916                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | addons-107916                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-107916 --wait=true                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | -p addons-107916                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-107916 ip                                                                            | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	| addons  | addons-107916 addons disable                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | -p addons-107916                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-107916 ssh cat                                                                       | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | /opt/local-path-provisioner/pvc-2358e9d1-a1ee-49c0-8dab-57be5f72d3ad_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-107916 addons disable                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | addons-107916                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | addons-107916                                                                               |                        |         |         |                     |                     |
	| addons  | addons-107916 addons                                                                        | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-107916 ssh curl -s                                                                   | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-107916 ip                                                                            | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| addons  | addons-107916 addons disable                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-107916 addons                                                                        | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-107916 addons disable                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-107916 addons                                                                        | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:54:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:54:52.680592 1135902 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:54:52.681297 1135902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:54:52.681343 1135902 out.go:304] Setting ErrFile to fd 2...
	I0214 02:54:52.681370 1135902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:54:52.681697 1135902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 02:54:52.682231 1135902 out.go:298] Setting JSON to false
	I0214 02:54:52.683134 1135902 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20239,"bootTime":1707859054,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 02:54:52.683248 1135902 start.go:138] virtualization:  
	I0214 02:54:52.685872 1135902 out.go:177] * [addons-107916] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:54:52.688169 1135902 out.go:177]   - MINIKUBE_LOCATION=18166
	I0214 02:54:52.690131 1135902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:54:52.688307 1135902 notify.go:220] Checking for updates...
	I0214 02:54:52.692326 1135902 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 02:54:52.694278 1135902 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 02:54:52.696507 1135902 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 02:54:52.698042 1135902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 02:54:52.699926 1135902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:54:52.719414 1135902 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:54:52.719598 1135902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:54:52.787447 1135902 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:54:52.778072093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:54:52.787574 1135902 docker.go:295] overlay module found
	I0214 02:54:52.790777 1135902 out.go:177] * Using the docker driver based on user configuration
	I0214 02:54:52.792781 1135902 start.go:298] selected driver: docker
	I0214 02:54:52.792797 1135902 start.go:902] validating driver "docker" against <nil>
	I0214 02:54:52.792809 1135902 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 02:54:52.793446 1135902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:54:52.846117 1135902 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:54:52.837653977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:54:52.846300 1135902 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:54:52.846527 1135902 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 02:54:52.848466 1135902 out.go:177] * Using Docker driver with root privileges
	I0214 02:54:52.850364 1135902 cni.go:84] Creating CNI manager for ""
	I0214 02:54:52.850384 1135902 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:54:52.850395 1135902 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 02:54:52.850406 1135902 start_flags.go:321] config:
	{Name:addons-107916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-107916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:54:52.852586 1135902 out.go:177] * Starting control plane node addons-107916 in cluster addons-107916
	I0214 02:54:52.854710 1135902 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0214 02:54:52.856463 1135902 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:54:52.858280 1135902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:54:52.858340 1135902 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0214 02:54:52.858353 1135902 cache.go:56] Caching tarball of preloaded images
	I0214 02:54:52.858381 1135902 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:54:52.858442 1135902 preload.go:174] Found /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0214 02:54:52.858453 1135902 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0214 02:54:52.858816 1135902 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/config.json ...
	I0214 02:54:52.858849 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/config.json: {Name:mk274e10426dd26b4871c717ee700cbff5881a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:54:52.872907 1135902 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:54:52.873020 1135902 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:54:52.873042 1135902 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 02:54:52.873050 1135902 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 02:54:52.873058 1135902 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 02:54:52.873067 1135902 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0214 02:55:08.980979 1135902 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0214 02:55:08.981018 1135902 cache.go:194] Successfully downloaded all kic artifacts
	I0214 02:55:08.981072 1135902 start.go:365] acquiring machines lock for addons-107916: {Name:mk6b22d499aa6f5c49dd6b9052c82033de2a5e67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 02:55:08.981637 1135902 start.go:369] acquired machines lock for "addons-107916" in 543.518µs
	I0214 02:55:08.981684 1135902 start.go:93] Provisioning new machine with config: &{Name:addons-107916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-107916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0214 02:55:08.981765 1135902 start.go:125] createHost starting for "" (driver="docker")
	I0214 02:55:08.984089 1135902 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0214 02:55:08.984348 1135902 start.go:159] libmachine.API.Create for "addons-107916" (driver="docker")
	I0214 02:55:08.984385 1135902 client.go:168] LocalClient.Create starting
	I0214 02:55:08.984507 1135902 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem
	I0214 02:55:09.455745 1135902 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem
	I0214 02:55:09.780833 1135902 cli_runner.go:164] Run: docker network inspect addons-107916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 02:55:09.795058 1135902 cli_runner.go:211] docker network inspect addons-107916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 02:55:09.795151 1135902 network_create.go:281] running [docker network inspect addons-107916] to gather additional debugging logs...
	I0214 02:55:09.795174 1135902 cli_runner.go:164] Run: docker network inspect addons-107916
	W0214 02:55:09.812598 1135902 cli_runner.go:211] docker network inspect addons-107916 returned with exit code 1
	I0214 02:55:09.812632 1135902 network_create.go:284] error running [docker network inspect addons-107916]: docker network inspect addons-107916: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-107916 not found
	I0214 02:55:09.812645 1135902 network_create.go:286] output of [docker network inspect addons-107916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-107916 not found
	
	** /stderr **
	I0214 02:55:09.812761 1135902 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 02:55:09.827555 1135902 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025b6b00}
	I0214 02:55:09.827596 1135902 network_create.go:124] attempt to create docker network addons-107916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0214 02:55:09.827656 1135902 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-107916 addons-107916
	I0214 02:55:09.891730 1135902 network_create.go:108] docker network addons-107916 192.168.49.0/24 created
	I0214 02:55:09.891764 1135902 kic.go:121] calculated static IP "192.168.49.2" for the "addons-107916" container
	I0214 02:55:09.891837 1135902 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 02:55:09.906374 1135902 cli_runner.go:164] Run: docker volume create addons-107916 --label name.minikube.sigs.k8s.io=addons-107916 --label created_by.minikube.sigs.k8s.io=true
	I0214 02:55:09.922178 1135902 oci.go:103] Successfully created a docker volume addons-107916
	I0214 02:55:09.922265 1135902 cli_runner.go:164] Run: docker run --rm --name addons-107916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-107916 --entrypoint /usr/bin/test -v addons-107916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0214 02:55:12.071570 1135902 cli_runner.go:217] Completed: docker run --rm --name addons-107916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-107916 --entrypoint /usr/bin/test -v addons-107916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (2.149254273s)
	I0214 02:55:12.071606 1135902 oci.go:107] Successfully prepared a docker volume addons-107916
	I0214 02:55:12.071644 1135902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:55:12.071667 1135902 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 02:55:12.071759 1135902 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-107916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 02:55:16.349160 1135902 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-107916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.277352862s)
	I0214 02:55:16.349202 1135902 kic.go:203] duration metric: took 4.277532 seconds to extract preloaded images to volume
	W0214 02:55:16.349352 1135902 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 02:55:16.349485 1135902 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 02:55:16.407151 1135902 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-107916 --name addons-107916 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-107916 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-107916 --network addons-107916 --ip 192.168.49.2 --volume addons-107916:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0214 02:55:16.704599 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Running}}
	I0214 02:55:16.730128 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:16.756066 1135902 cli_runner.go:164] Run: docker exec addons-107916 stat /var/lib/dpkg/alternatives/iptables
	I0214 02:55:16.818054 1135902 oci.go:144] the created container "addons-107916" has a running status.
	I0214 02:55:16.818086 1135902 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa...
	I0214 02:55:17.282809 1135902 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 02:55:17.310707 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:17.330723 1135902 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 02:55:17.330749 1135902 kic_runner.go:114] Args: [docker exec --privileged addons-107916 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 02:55:17.405158 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:17.432937 1135902 machine.go:88] provisioning docker machine ...
	I0214 02:55:17.432970 1135902 ubuntu.go:169] provisioning hostname "addons-107916"
	I0214 02:55:17.433044 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:17.460304 1135902 main.go:141] libmachine: Using SSH client type: native
	I0214 02:55:17.460728 1135902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34032 <nil> <nil>}
	I0214 02:55:17.460746 1135902 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-107916 && echo "addons-107916" | sudo tee /etc/hostname
	I0214 02:55:17.641124 1135902 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-107916
	
	I0214 02:55:17.641205 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:17.663986 1135902 main.go:141] libmachine: Using SSH client type: native
	I0214 02:55:17.664399 1135902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34032 <nil> <nil>}
	I0214 02:55:17.664420 1135902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-107916' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-107916/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-107916' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 02:55:17.803964 1135902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 02:55:17.804032 1135902 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18166-1129740/.minikube CaCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18166-1129740/.minikube}
	I0214 02:55:17.804067 1135902 ubuntu.go:177] setting up certificates
	I0214 02:55:17.804105 1135902 provision.go:83] configureAuth start
	I0214 02:55:17.804229 1135902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-107916
	I0214 02:55:17.821292 1135902 provision.go:138] copyHostCerts
	I0214 02:55:17.821374 1135902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem (1082 bytes)
	I0214 02:55:17.821507 1135902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem (1123 bytes)
	I0214 02:55:17.821567 1135902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem (1675 bytes)
	I0214 02:55:17.821608 1135902 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem org=jenkins.addons-107916 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-107916]
	I0214 02:55:18.013638 1135902 provision.go:172] copyRemoteCerts
	I0214 02:55:18.013724 1135902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 02:55:18.013782 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.031376 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.132752 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 02:55:18.158097 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0214 02:55:18.181976 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
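The server certificate generated by configureAuth is signed for the SANs listed above (192.168.49.2, 127.0.0.1, localhost, minikube, addons-107916). If TLS to the machine misbehaves, the SANs in the copied cert can be checked in place; a minimal sketch, assuming the remote path from the scp lines above:

	sudo openssl x509 -noout -text -in /etc/docker/server.pem \
	  | grep -A1 'Subject Alternative Name'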
	I0214 02:55:18.206484 1135902 provision.go:86] duration metric: configureAuth took 402.348621ms
	I0214 02:55:18.206511 1135902 ubuntu.go:193] setting minikube options for container-runtime
	I0214 02:55:18.206705 1135902 config.go:182] Loaded profile config "addons-107916": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 02:55:18.206724 1135902 machine.go:91] provisioned docker machine in 773.76556ms
	I0214 02:55:18.206731 1135902 client.go:171] LocalClient.Create took 9.222338542s
	I0214 02:55:18.206750 1135902 start.go:167] duration metric: libmachine.API.Create for "addons-107916" took 9.222403253s
	I0214 02:55:18.206767 1135902 start.go:300] post-start starting for "addons-107916" (driver="docker")
	I0214 02:55:18.206777 1135902 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 02:55:18.206838 1135902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 02:55:18.206890 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.223581 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.316993 1135902 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 02:55:18.320132 1135902 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 02:55:18.320172 1135902 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 02:55:18.320185 1135902 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 02:55:18.320194 1135902 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 02:55:18.320205 1135902 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/addons for local assets ...
	I0214 02:55:18.320277 1135902 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/files for local assets ...
	I0214 02:55:18.320318 1135902 start.go:303] post-start completed in 113.543978ms
	I0214 02:55:18.320643 1135902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-107916
	I0214 02:55:18.336202 1135902 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/config.json ...
	I0214 02:55:18.336511 1135902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 02:55:18.336567 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.353151 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.444544 1135902 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 02:55:18.448771 1135902 start.go:128] duration metric: createHost completed in 9.466988829s
	I0214 02:55:18.448808 1135902 start.go:83] releasing machines lock for "addons-107916", held for 9.467145117s
	I0214 02:55:18.448880 1135902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-107916
	I0214 02:55:18.466144 1135902 ssh_runner.go:195] Run: cat /version.json
	I0214 02:55:18.466196 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.466226 1135902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 02:55:18.466289 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.483661 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.495609 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.709217 1135902 ssh_runner.go:195] Run: systemctl --version
	I0214 02:55:18.713855 1135902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 02:55:18.718166 1135902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0214 02:55:18.745184 1135902 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0214 02:55:18.745276 1135902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 02:55:18.773038 1135902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0214 02:55:18.773067 1135902 start.go:475] detecting cgroup driver to use...
	I0214 02:55:18.773100 1135902 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 02:55:18.773164 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0214 02:55:18.785654 1135902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0214 02:55:18.797226 1135902 docker.go:217] disabling cri-docker service (if available) ...
	I0214 02:55:18.797333 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 02:55:18.811408 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 02:55:18.825877 1135902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 02:55:18.923004 1135902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 02:55:19.015460 1135902 docker.go:233] disabling docker service ...
	I0214 02:55:19.015599 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 02:55:19.036142 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 02:55:19.048070 1135902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 02:55:19.143298 1135902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 02:55:19.244823 1135902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 02:55:19.256366 1135902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 02:55:19.273124 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0214 02:55:19.283357 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0214 02:55:19.293838 1135902 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0214 02:55:19.293937 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0214 02:55:19.304263 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 02:55:19.314248 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0214 02:55:19.324356 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 02:55:19.334030 1135902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 02:55:19.343055 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0214 02:55:19.352583 1135902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 02:55:19.361621 1135902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 02:55:19.370237 1135902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 02:55:19.458134 1135902 ssh_runner.go:195] Run: sudo systemctl restart containerd
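The sed pipeline above edits /etc/containerd/config.toml in place (pause image, runc v2 runtime, SystemdCgroup = false to match the detected cgroupfs driver) before containerd is restarted. A quick way to confirm the rewrite took effect on the node:

	grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	# SystemdCgroup = false
	# sandbox_image = "registry.k8s.io/pause:3.9"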
	I0214 02:55:19.588913 1135902 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0214 02:55:19.589086 1135902 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0214 02:55:19.592751 1135902 start.go:543] Will wait 60s for crictl version
	I0214 02:55:19.592866 1135902 ssh_runner.go:195] Run: which crictl
	I0214 02:55:19.596216 1135902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 02:55:19.632693 1135902 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0214 02:55:19.632780 1135902 ssh_runner.go:195] Run: containerd --version
	I0214 02:55:19.658820 1135902 ssh_runner.go:195] Run: containerd --version
	I0214 02:55:19.687121 1135902 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0214 02:55:19.688747 1135902 cli_runner.go:164] Run: docker network inspect addons-107916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 02:55:19.708514 1135902 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 02:55:19.712177 1135902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
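The /etc/hosts edit above rewrites the file through a temp copy rather than appending, so repeated starts do not accumulate duplicate host.minikube.internal entries. The resulting entry can be inspected from the host with minikube's own SSH wrapper (profile name from this run):

	minikube -p addons-107916 ssh -- grep host.minikube.internal /etc/hosts
	# 192.168.49.1	host.minikube.internal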
	I0214 02:55:19.723189 1135902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:55:19.723274 1135902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 02:55:19.760361 1135902 containerd.go:612] all images are preloaded for containerd runtime.
	I0214 02:55:19.760386 1135902 containerd.go:519] Images already preloaded, skipping extraction
	I0214 02:55:19.760449 1135902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 02:55:19.800968 1135902 containerd.go:612] all images are preloaded for containerd runtime.
	I0214 02:55:19.800991 1135902 cache_images.go:84] Images are preloaded, skipping loading
	I0214 02:55:19.801060 1135902 ssh_runner.go:195] Run: sudo crictl info
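Both `crictl images` passes report the full preloaded image set, so tarball extraction and image loading are skipped. To eyeball the same list while debugging image problems, one option (assuming jq is installed on the node, which minikube does not guarantee) is:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'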
	I0214 02:55:19.837882 1135902 cni.go:84] Creating CNI manager for ""
	I0214 02:55:19.837908 1135902 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:55:19.837934 1135902 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 02:55:19.837954 1135902 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-107916 NodeName:addons-107916 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 02:55:19.838089 1135902 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-107916"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
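The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written out as /var/tmp/minikube/kubeadm.yaml.new further down and fed to kubeadm init. With kubeadm v1.26+ the file can also be sanity-checked up front; a hedged example:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml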
	
	I0214 02:55:19.838154 1135902 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-107916 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-107916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
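The drop-in above lands as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines just below); the empty ExecStart= first clears any packaged command before substituting minikube's own kubelet invocation. systemd can show the merged result:

	systemctl cat kubelet
	# prints kubelet.service plus the 10-kubeadm.conf drop-in shown above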
	I0214 02:55:19.838225 1135902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0214 02:55:19.847194 1135902 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 02:55:19.847335 1135902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 02:55:19.856419 1135902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0214 02:55:19.874573 1135902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 02:55:19.892732 1135902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0214 02:55:19.910887 1135902 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 02:55:19.914323 1135902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 02:55:19.924967 1135902 certs.go:56] Setting up /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916 for IP: 192.168.49.2
	I0214 02:55:19.925008 1135902 certs.go:190] acquiring lock for shared ca certs: {Name:mk121f32762802a204d98d3cbcae9456442a0756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:19.925136 1135902 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key
	I0214 02:55:20.298274 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt ...
	I0214 02:55:20.298308 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt: {Name:mk9232405af826090594a99131ef96f3d2514d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:20.298896 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key ...
	I0214 02:55:20.298913 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key: {Name:mk0a6668030acf9159a9780805dccc10fe597a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:20.299344 1135902 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key
	I0214 02:55:20.890828 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt ...
	I0214 02:55:20.890861 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt: {Name:mk7631a211312feb81d4799510095b7fb6aa8261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:20.891063 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key ...
	I0214 02:55:20.891076 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key: {Name:mk5f00b5fdab86a0dd1e9f950d80789e275fdf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:20.891671 1135902 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.key
	I0214 02:55:20.891696 1135902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt with IP's: []
	I0214 02:55:21.329028 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt ...
	I0214 02:55:21.329065 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: {Name:mk4747a3dcba367cabde1c402e50d60b2eb375db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.329884 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.key ...
	I0214 02:55:21.329905 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.key: {Name:mkbf9ed041b362f64d90580e1f3eb25eb63ebf27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.330003 1135902 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key.dd3b5fb2
	I0214 02:55:21.330024 1135902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0214 02:55:21.669274 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt.dd3b5fb2 ...
	I0214 02:55:21.669304 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt.dd3b5fb2: {Name:mk2b44d0c4dc0933fb45841c75231c5d8e6d48cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.670054 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key.dd3b5fb2 ...
	I0214 02:55:21.670074 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key.dd3b5fb2: {Name:mkcbf1578689c03c0cd0903526ade0fca40ace19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.670166 1135902 certs.go:337] copying /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt
	I0214 02:55:21.670255 1135902 certs.go:341] copying /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key
	I0214 02:55:21.670314 1135902 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.key
	I0214 02:55:21.670337 1135902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.crt with IP's: []
	I0214 02:55:21.856925 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.crt ...
	I0214 02:55:21.856955 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.crt: {Name:mk88b1eababa70355b7017dc59995e38bfcf3ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.857143 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.key ...
	I0214 02:55:21.857161 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.key: {Name:mk9439f0e09b9c8af830afe5024cc89054cbd6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.857854 1135902 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 02:55:21.857904 1135902 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem (1082 bytes)
	I0214 02:55:21.857930 1135902 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem (1123 bytes)
	I0214 02:55:21.857958 1135902 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem (1675 bytes)
	I0214 02:55:21.858632 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 02:55:21.883313 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 02:55:21.907956 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 02:55:21.932137 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 02:55:21.956222 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 02:55:21.980738 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 02:55:22.008569 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 02:55:22.034568 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 02:55:22.059291 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 02:55:22.084369 1135902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 02:55:22.102623 1135902 ssh_runner.go:195] Run: openssl version
	I0214 02:55:22.108124 1135902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 02:55:22.118099 1135902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:55:22.121809 1135902 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:55 /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:55:22.121894 1135902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:55:22.129182 1135902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
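The symlink name b5213941.0 is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, as computed by the `openssl x509 -hash` run above, plus a .0 suffix. That c_rehash-style name is how OpenSSL's lookup in /etc/ssl/certs finds the CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941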
	I0214 02:55:22.138907 1135902 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 02:55:22.142345 1135902 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0214 02:55:22.142394 1135902 kubeadm.go:404] StartCluster: {Name:addons-107916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-107916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:55:22.142472 1135902 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0214 02:55:22.142543 1135902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 02:55:22.180825 1135902 cri.go:89] found id: ""
	I0214 02:55:22.180909 1135902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 02:55:22.189940 1135902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 02:55:22.198702 1135902 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0214 02:55:22.198789 1135902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 02:55:22.207746 1135902 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 02:55:22.207799 1135902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 02:55:22.259341 1135902 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0214 02:55:22.259750 1135902 kubeadm.go:322] [preflight] Running pre-flight checks
	I0214 02:55:22.300839 1135902 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0214 02:55:22.300953 1135902 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0214 02:55:22.301012 1135902 kubeadm.go:322] OS: Linux
	I0214 02:55:22.301085 1135902 kubeadm.go:322] CGROUPS_CPU: enabled
	I0214 02:55:22.301154 1135902 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0214 02:55:22.301231 1135902 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0214 02:55:22.301299 1135902 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0214 02:55:22.301374 1135902 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0214 02:55:22.301443 1135902 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0214 02:55:22.301524 1135902 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0214 02:55:22.301595 1135902 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0214 02:55:22.301664 1135902 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0214 02:55:22.373581 1135902 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 02:55:22.373736 1135902 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 02:55:22.373856 1135902 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 02:55:22.606025 1135902 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 02:55:22.610703 1135902 out.go:204]   - Generating certificates and keys ...
	I0214 02:55:22.610839 1135902 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0214 02:55:22.610948 1135902 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0214 02:55:22.890283 1135902 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 02:55:23.112828 1135902 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0214 02:55:23.540747 1135902 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0214 02:55:23.909634 1135902 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0214 02:55:24.270156 1135902 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0214 02:55:24.270517 1135902 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-107916 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 02:55:25.374502 1135902 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0214 02:55:25.374907 1135902 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-107916 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 02:55:26.181751 1135902 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 02:55:26.794538 1135902 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 02:55:27.425780 1135902 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0214 02:55:27.425867 1135902 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 02:55:28.301630 1135902 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 02:55:28.843465 1135902 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 02:55:29.237342 1135902 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 02:55:29.534216 1135902 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 02:55:29.534916 1135902 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 02:55:29.537583 1135902 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 02:55:29.539786 1135902 out.go:204]   - Booting up control plane ...
	I0214 02:55:29.539883 1135902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 02:55:29.539960 1135902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 02:55:29.542108 1135902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 02:55:29.555846 1135902 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 02:55:29.556908 1135902 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 02:55:29.557073 1135902 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0214 02:55:29.658910 1135902 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 02:55:38.163298 1135902 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504423 seconds
	I0214 02:55:38.163413 1135902 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 02:55:38.177926 1135902 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 02:55:38.701895 1135902 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 02:55:38.702086 1135902 kubeadm.go:322] [mark-control-plane] Marking the node addons-107916 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 02:55:39.215668 1135902 kubeadm.go:322] [bootstrap-token] Using token: xft5s7.209qx6e1eqh56ont
	I0214 02:55:39.217609 1135902 out.go:204]   - Configuring RBAC rules ...
	I0214 02:55:39.217761 1135902 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 02:55:39.223566 1135902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 02:55:39.233116 1135902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 02:55:39.237255 1135902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 02:55:39.241413 1135902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 02:55:39.245586 1135902 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 02:55:39.262817 1135902 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 02:55:39.505287 1135902 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0214 02:55:39.632080 1135902 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0214 02:55:39.633011 1135902 kubeadm.go:322] 
	I0214 02:55:39.633084 1135902 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0214 02:55:39.633091 1135902 kubeadm.go:322] 
	I0214 02:55:39.633163 1135902 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0214 02:55:39.633168 1135902 kubeadm.go:322] 
	I0214 02:55:39.633192 1135902 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0214 02:55:39.633247 1135902 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 02:55:39.633302 1135902 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 02:55:39.633307 1135902 kubeadm.go:322] 
	I0214 02:55:39.633357 1135902 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0214 02:55:39.633364 1135902 kubeadm.go:322] 
	I0214 02:55:39.633409 1135902 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 02:55:39.633413 1135902 kubeadm.go:322] 
	I0214 02:55:39.633464 1135902 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0214 02:55:39.633542 1135902 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 02:55:39.633606 1135902 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 02:55:39.633613 1135902 kubeadm.go:322] 
	I0214 02:55:39.633691 1135902 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 02:55:39.633762 1135902 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0214 02:55:39.633767 1135902 kubeadm.go:322] 
	I0214 02:55:39.633845 1135902 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xft5s7.209qx6e1eqh56ont \
	I0214 02:55:39.633943 1135902 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d3f320a98a2f1022ee1a4d9bbdd9d3ce0ce634a8fab1d54ded076f0a14b0e04e \
	I0214 02:55:39.633963 1135902 kubeadm.go:322] 	--control-plane 
	I0214 02:55:39.633968 1135902 kubeadm.go:322] 
	I0214 02:55:39.634046 1135902 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0214 02:55:39.634051 1135902 kubeadm.go:322] 
	I0214 02:55:39.634127 1135902 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xft5s7.209qx6e1eqh56ont \
	I0214 02:55:39.634222 1135902 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d3f320a98a2f1022ee1a4d9bbdd9d3ce0ce634a8fab1d54ded076f0a14b0e04e 
	I0214 02:55:39.640063 1135902 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0214 02:55:39.640180 1135902 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
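The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed from the CA certificate (copied earlier to /var/lib/minikube/certs/ca.crt) with the standard kubeadm recipe:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# d3f320a98a2f1022ee1a4d9bbdd9d3ce0ce634a8fab1d54ded076f0a14b0e04e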
	I0214 02:55:39.640354 1135902 cni.go:84] Creating CNI manager for ""
	I0214 02:55:39.640383 1135902 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:55:39.643329 1135902 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 02:55:39.645227 1135902 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 02:55:39.658702 1135902 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0214 02:55:39.658720 1135902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 02:55:39.693456 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 02:55:40.696947 1135902 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.003451432s)
	I0214 02:55:40.697004 1135902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 02:55:40.697122 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:40.697230 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=40f210e92693e4612e04be0697de06db21ac5cf0 minikube.k8s.io/name=addons-107916 minikube.k8s.io/updated_at=2024_02_14T02_55_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:40.890126 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:40.890237 1135902 ops.go:34] apiserver oom_adj: -16
	I0214 02:55:41.390461 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:41.890277 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:42.391208 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:42.890848 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:43.390277 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:43.890995 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:44.391082 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:44.890373 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:45.391102 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:45.890916 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:46.390775 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:46.890727 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:47.390278 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:47.890512 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:48.391080 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:48.891160 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:49.390318 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:49.890811 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:50.390280 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:50.890611 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:51.390564 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:51.890255 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:51.980427 1135902 kubeadm.go:1088] duration metric: took 11.283354795s to wait for elevateKubeSystemPrivileges.
	I0214 02:55:51.980459 1135902 kubeadm.go:406] StartCluster complete in 29.838068306s
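The half-second polling above (the repeated `kubectl get sa default` runs) is simply waiting for the default ServiceAccount to appear so kube-system privileges can be elevated; it accounts for 11.28s of the 29.84s StartCluster total. A hedged shell equivalent of the same wait, outside minikube:

	until kubectl --context addons-107916 -n default get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done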
	I0214 02:55:51.980477 1135902 settings.go:142] acquiring lock: {Name:mkcc971fda27c724b3c1908f1b3da87aea10d784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:51.980597 1135902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 02:55:51.980988 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/kubeconfig: {Name:mkc9d4ef83ac02b186254a828f8611428408dff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:51.981639 1135902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 02:55:51.981938 1135902 config.go:182] Loaded profile config "addons-107916": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 02:55:51.982100 1135902 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
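Each entry set to true in the toEnable map above gets its own "Setting addon ... in addons-107916" sequence below, started concurrently, which is why the timestamps that follow interleave. Once the profile is up, the effective set can be listed with:

	minikube -p addons-107916 addons list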
	I0214 02:55:51.982199 1135902 addons.go:69] Setting yakd=true in profile "addons-107916"
	I0214 02:55:51.982214 1135902 addons.go:234] Setting addon yakd=true in "addons-107916"
	I0214 02:55:51.982249 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.982688 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.983188 1135902 addons.go:69] Setting cloud-spanner=true in profile "addons-107916"
	I0214 02:55:51.983207 1135902 addons.go:234] Setting addon cloud-spanner=true in "addons-107916"
	I0214 02:55:51.983247 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.983663 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.983862 1135902 addons.go:69] Setting metrics-server=true in profile "addons-107916"
	I0214 02:55:51.983880 1135902 addons.go:234] Setting addon metrics-server=true in "addons-107916"
	I0214 02:55:51.983911 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.984307 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.984694 1135902 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-107916"
	I0214 02:55:51.984736 1135902 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-107916"
	I0214 02:55:51.984774 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.985149 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.989762 1135902 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-107916"
	I0214 02:55:51.991034 1135902 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-107916"
	I0214 02:55:51.991120 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.991737 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.994102 1135902 addons.go:69] Setting default-storageclass=true in profile "addons-107916"
	I0214 02:55:51.994131 1135902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-107916"
	I0214 02:55:51.994460 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.004391 1135902 addons.go:69] Setting registry=true in profile "addons-107916"
	I0214 02:55:52.004835 1135902 addons.go:234] Setting addon registry=true in "addons-107916"
	I0214 02:55:52.005038 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.017003 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.005191 1135902 addons.go:69] Setting storage-provisioner=true in profile "addons-107916"
	I0214 02:55:52.035047 1135902 addons.go:234] Setting addon storage-provisioner=true in "addons-107916"
	I0214 02:55:52.035212 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.035739 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.005212 1135902 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-107916"
	I0214 02:55:52.054516 1135902 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-107916"
	I0214 02:55:52.054948 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.005221 1135902 addons.go:69] Setting volumesnapshots=true in profile "addons-107916"
	I0214 02:55:52.088529 1135902 addons.go:234] Setting addon volumesnapshots=true in "addons-107916"
	I0214 02:55:52.088612 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.089240 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.014818 1135902 addons.go:69] Setting gcp-auth=true in profile "addons-107916"
	I0214 02:55:52.122831 1135902 mustload.go:65] Loading cluster: addons-107916
	I0214 02:55:52.127532 1135902 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0214 02:55:52.129277 1135902 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0214 02:55:52.129296 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0214 02:55:52.129374 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.123546 1135902 config.go:182] Loaded profile config "addons-107916": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 02:55:52.144125 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.014952 1135902 addons.go:69] Setting ingress-dns=true in profile "addons-107916"
	I0214 02:55:52.014957 1135902 addons.go:69] Setting inspektor-gadget=true in profile "addons-107916"
	I0214 02:55:52.014942 1135902 addons.go:69] Setting ingress=true in profile "addons-107916"
	I0214 02:55:52.159720 1135902 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0214 02:55:52.160499 1135902 addons.go:234] Setting addon ingress-dns=true in "addons-107916"
	I0214 02:55:52.167776 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.160521 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0214 02:55:52.171036 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0214 02:55:52.167780 1135902 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0214 02:55:52.168232 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.160538 1135902 addons.go:234] Setting addon inspektor-gadget=true in "addons-107916"
	I0214 02:55:52.160543 1135902 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0214 02:55:52.160530 1135902 addons.go:234] Setting addon ingress=true in "addons-107916"
	I0214 02:55:52.185014 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.185467 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.217014 1135902 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0214 02:55:52.239341 1135902 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 02:55:52.239364 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0214 02:55:52.239434 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.217118 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0214 02:55:52.244444 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.269153 1135902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 02:55:52.218043 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.217959 1135902 addons.go:234] Setting addon default-storageclass=true in "addons-107916"
	I0214 02:55:52.274956 1135902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 02:55:52.274967 1135902 out.go:177]   - Using image docker.io/registry:2.8.3
	I0214 02:55:52.274971 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0214 02:55:52.275418 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.276879 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0214 02:55:52.276897 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0214 02:55:52.276962 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.277301 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.277782 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.293370 1135902 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0214 02:55:52.279383 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 02:55:52.306987 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0214 02:55:52.309093 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0214 02:55:52.316899 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0214 02:55:52.307280 1135902 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0214 02:55:52.307350 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.336829 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0214 02:55:52.324464 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0214 02:55:52.339675 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.341212 1135902 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-107916"
	I0214 02:55:52.341252 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.341728 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.394056 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0214 02:55:52.389998 1135902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 02:55:52.391199 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.399640 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.411723 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0214 02:55:52.425713 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0214 02:55:52.425787 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0214 02:55:52.431678 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0214 02:55:52.431703 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0214 02:55:52.431768 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.426880 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.483937 1135902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0214 02:55:52.487674 1135902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0214 02:55:52.490607 1135902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0214 02:55:52.493653 1135902 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 02:55:52.493674 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0214 02:55:52.493737 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.496190 1135902 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0214 02:55:52.490999 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.505731 1135902 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 02:55:52.505751 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0214 02:55:52.505818 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.532284 1135902 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-107916" context rescaled to 1 replicas
	I0214 02:55:52.532323 1135902 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
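Rescaling the coredns deployment to one replica keeps the single-node cluster lean. The manual equivalent of the kapi rescale above would be something like this (illustrative command, not captured from this run):

	kubectl --context addons-107916 -n kube-system scale deployment coredns --replicas=1
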
	I0214 02:55:52.538194 1135902 out.go:177] * Verifying Kubernetes components...
	I0214 02:55:52.540378 1135902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 02:55:52.541876 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.541912 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.543038 1135902 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0214 02:55:52.547302 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0214 02:55:52.547323 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0214 02:55:52.547395 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.571990 1135902 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 02:55:52.572011 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 02:55:52.572074 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.599987 1135902 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0214 02:55:52.602269 1135902 out.go:177]   - Using image docker.io/busybox:stable
	I0214 02:55:52.604775 1135902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 02:55:52.604799 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0214 02:55:52.604865 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.643575 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.646287 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.683487 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.713985 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.723867 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.727585 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.730090 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.733766 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.752453 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	W0214 02:55:52.774125 1135902 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0214 02:55:52.774157 1135902 retry.go:31] will retry after 369.891388ms: ssh: handshake failed: EOF
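The handshake EOF above is a transient dial failure while the node's sshd is still coming up; the retry loop simply re-dials after a short backoff. A hand-run equivalent of that dial, using the endpoint and key the sshutil lines report (hypothetical invocation, not part of the test run):

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa \
	    -p 34032 docker@127.0.0.1 true
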
	I0214 02:55:53.271006 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 02:55:53.273753 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 02:55:53.357939 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0214 02:55:53.381564 1135902 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0214 02:55:53.381634 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0214 02:55:53.429385 1135902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0214 02:55:53.429455 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0214 02:55:53.482926 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0214 02:55:53.482998 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0214 02:55:53.497479 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 02:55:53.517799 1135902 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0214 02:55:53.517878 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0214 02:55:53.578106 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 02:55:53.602517 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 02:55:53.628104 1135902 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0214 02:55:53.628176 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0214 02:55:53.640631 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0214 02:55:53.640708 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0214 02:55:53.709364 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0214 02:55:53.709438 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0214 02:55:53.714137 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0214 02:55:53.714211 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0214 02:55:53.758073 1135902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0214 02:55:53.758150 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0214 02:55:53.784104 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 02:55:53.876365 1135902 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0214 02:55:53.876457 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0214 02:55:53.907710 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0214 02:55:53.987670 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0214 02:55:53.987695 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0214 02:55:54.029221 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0214 02:55:54.029288 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0214 02:55:54.046469 1135902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 02:55:54.046540 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0214 02:55:54.066690 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0214 02:55:54.066763 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0214 02:55:54.119068 1135902 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0214 02:55:54.119146 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0214 02:55:54.187677 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0214 02:55:54.187752 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0214 02:55:54.199173 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0214 02:55:54.199247 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0214 02:55:54.295839 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0214 02:55:54.295910 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0214 02:55:54.297393 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 02:55:54.342061 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0214 02:55:54.342125 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0214 02:55:54.428029 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0214 02:55:54.428102 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0214 02:55:54.470281 1135902 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 02:55:54.470356 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0214 02:55:54.490323 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0214 02:55:54.491109 1135902 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.950687711s)
	I0214 02:55:54.492036 1135902 node_ready.go:35] waiting up to 6m0s for node "addons-107916" to be "Ready" ...
	I0214 02:55:54.495915 1135902 node_ready.go:49] node "addons-107916" has status "Ready":"True"
	I0214 02:55:54.495980 1135902 node_ready.go:38] duration metric: took 3.894354ms waiting for node "addons-107916" to be "Ready" ...
	I0214 02:55:54.496005 1135902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 02:55:54.496601 1135902 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.090794616s)
	I0214 02:55:54.496648 1135902 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
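The sed pipeline that just completed rewrites the coredns ConfigMap so the Corefile gains a hosts block (plus a log directive ahead of errors). Inspecting the ConfigMap afterwards should show roughly this fragment (a sketch of the expected shape, not output captured from this run):

	kubectl --context addons-107916 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
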
	I0214 02:55:54.505561 1135902 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace to be "Ready" ...
	I0214 02:55:54.577614 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0214 02:55:54.577686 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0214 02:55:54.639962 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 02:55:54.666587 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0214 02:55:54.666659 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0214 02:55:54.789011 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0214 02:55:54.789083 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0214 02:55:54.937866 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0214 02:55:54.937939 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0214 02:55:55.085340 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.814283827s)
	I0214 02:55:55.126619 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0214 02:55:55.126709 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0214 02:55:55.246750 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0214 02:55:55.246821 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0214 02:55:55.282344 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0214 02:55:55.282425 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0214 02:55:55.307449 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 02:55:55.307556 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0214 02:55:55.412017 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0214 02:55:55.412089 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0214 02:55:55.664078 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 02:55:55.709152 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0214 02:55:55.709226 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0214 02:55:55.879534 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0214 02:55:56.518275 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:55:58.546610 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:55:59.225055 1135902 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0214 02:55:59.225136 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:59.245009 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:59.562260 1135902 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0214 02:55:59.703677 1135902 addons.go:234] Setting addon gcp-auth=true in "addons-107916"
	I0214 02:55:59.703754 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:59.704213 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:59.727873 1135902 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0214 02:55:59.727932 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:59.747597 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:59.946833 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.673042275s)
	I0214 02:55:59.946866 1135902 addons.go:470] Verifying addon ingress=true in "addons-107916"
	I0214 02:55:59.949783 1135902 out.go:177] * Verifying ingress addon...
	I0214 02:55:59.947054 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.589046501s)
	I0214 02:55:59.947115 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.449557178s)
	I0214 02:55:59.947148 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.369022704s)
	I0214 02:55:59.947177 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.344642223s)
	I0214 02:55:59.947217 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.163045565s)
	I0214 02:55:59.947311 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.64985113s)
	I0214 02:55:59.947349 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.456962102s)
	I0214 02:55:59.947420 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.307382914s)
	I0214 02:55:59.947430 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.039462724s)
	I0214 02:55:59.952277 1135902 addons.go:470] Verifying addon registry=true in "addons-107916"
	I0214 02:55:59.955117 1135902 out.go:177] * Verifying registry addon...
	I0214 02:55:59.953392 1135902 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0214 02:55:59.953430 1135902 addons.go:470] Verifying addon metrics-server=true in "addons-107916"
	W0214 02:55:59.953611 1135902 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0214 02:55:59.959274 1135902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0214 02:55:59.961054 1135902 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-107916 service yakd-dashboard -n yakd-dashboard
	
	I0214 02:55:59.961181 1135902 retry.go:31] will retry after 361.504562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
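Both failures are the usual CRD-registration race: the VolumeSnapshot CRDs are created by the same apply that consumes them, and the API server is not yet serving them when the VolumeSnapshotClass arrives, hence "ensure CRDs are installed first". minikube simply waits and re-applies (with --force, a few lines below); an alternative is to block until the CRDs report Established before applying the dependent objects (illustrative, not what the harness does):

	kubectl wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	    crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	    crd/volumesnapshots.snapshot.storage.k8s.io
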
	I0214 02:55:59.970823 1135902 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0214 02:55:59.970859 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:55:59.984282 1135902 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0214 02:55:59.984309 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0214 02:55:59.984841 1135902 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
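That warning is an optimistic-concurrency conflict: the StorageClass object was modified between the addon's read and its update, so the write is rejected for carrying a stale resourceVersion. Re-reading and re-annotating succeeds; done by hand the same change looks like this (illustrative, following the standard default-StorageClass recipe):

	kubectl --context addons-107916 patch storageclass local-path -p \
	    '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
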
	I0214 02:56:00.325597 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 02:56:00.466814 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:00.468640 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:00.963439 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:00.971531 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:01.023525 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:01.479182 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:01.487008 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:01.697656 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.033371174s)
	I0214 02:56:01.697746 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.818027711s)
	I0214 02:56:01.697692 1135902 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-107916"
	I0214 02:56:01.697905 1135902 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.969904033s)
	I0214 02:56:01.700242 1135902 out.go:177] * Verifying csi-hostpath-driver addon...
	I0214 02:56:01.702303 1135902 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0214 02:56:01.703315 1135902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0214 02:56:01.707863 1135902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0214 02:56:01.710064 1135902 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0214 02:56:01.710093 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0214 02:56:01.716919 1135902 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0214 02:56:01.716950 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:01.778119 1135902 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0214 02:56:01.778190 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0214 02:56:01.830300 1135902 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 02:56:01.830388 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0214 02:56:01.881279 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 02:56:01.963933 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:01.969620 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:02.211864 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:02.464112 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:02.469209 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:02.575146 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.249478297s)
	I0214 02:56:02.711362 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:02.967514 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:02.976360 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:03.050248 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:03.061327 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.179959145s)
	I0214 02:56:03.064152 1135902 addons.go:470] Verifying addon gcp-auth=true in "addons-107916"
	I0214 02:56:03.066372 1135902 out.go:177] * Verifying gcp-auth addon...
	I0214 02:56:03.070340 1135902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0214 02:56:03.080693 1135902 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0214 02:56:03.080720 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
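Each of these kapi wait loops just polls pods by label selector until they leave Pending; the gcp-auth one, for instance, is roughly equivalent to watching (illustrative):

	kubectl --context addons-107916 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth --watch
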
	I0214 02:56:03.212449 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:03.463188 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:03.467335 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:03.574875 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:03.711065 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:03.963187 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:03.966628 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:04.074317 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:04.211633 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:04.468620 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:04.476510 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:04.574757 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:04.711748 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:04.965112 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:04.968873 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:05.075297 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:05.210703 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:05.475389 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:05.477316 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:05.512931 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:05.574638 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:05.712092 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:05.963665 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:05.966460 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:06.075175 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:06.211277 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:06.464419 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:06.467032 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:06.574267 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:06.710889 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:06.969464 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:06.970426 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:07.076907 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:07.211917 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:07.463693 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:07.467165 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:07.513047 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:07.574411 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:07.712164 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:07.965219 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:07.967572 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:08.075067 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:08.211975 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:08.463887 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:08.467745 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:08.574794 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:08.712120 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:08.964018 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:08.969298 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:09.075598 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:09.212453 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:09.464794 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:09.469866 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:09.513608 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:09.574814 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:09.723571 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:09.964946 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:09.968427 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:10.075434 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:10.212019 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:10.463700 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:10.467203 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:10.579827 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:10.711696 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:10.963727 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:10.966068 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:11.074612 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:11.212352 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:11.467740 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:11.470207 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:11.575440 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:11.711123 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:11.969105 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:11.970599 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:12.017942 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:12.074806 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:12.210941 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:12.468556 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:12.469016 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:12.574063 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:12.712483 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:12.963008 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:12.967213 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:13.074677 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:13.211512 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:13.463240 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:13.466768 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:13.574788 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:13.710803 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:13.963776 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:13.965515 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:14.074623 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:14.212141 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:14.462531 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:14.466716 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:14.512466 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:14.574238 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:14.710917 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:14.964653 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:14.965821 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:15.074890 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:15.211384 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:15.463733 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:15.468088 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:15.574659 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:15.711162 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:15.963236 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:15.966352 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:16.074601 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:16.211050 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:16.463768 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:16.467175 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:16.512872 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:16.574742 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:16.711555 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:16.963724 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:16.967539 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:17.074120 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:17.211908 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:17.465784 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:17.466911 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:17.574710 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:17.711470 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:17.962785 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:17.965698 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:18.078143 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:18.211301 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:18.462396 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:18.466702 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:18.574346 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:18.711319 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:18.962725 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:18.965763 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:19.013149 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:19.074339 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:19.211320 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:19.463665 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:19.466734 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:19.574365 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:19.712460 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:19.966550 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:19.967611 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:20.075841 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:20.212454 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:20.463304 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:20.467178 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:20.576374 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:20.712490 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:20.962947 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:20.966036 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:21.074330 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:21.211227 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:21.462655 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:21.466049 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:21.512524 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:21.574041 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:21.711453 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:21.962932 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:21.965673 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:22.074722 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:22.211004 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:22.465491 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:22.466499 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:22.574833 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:22.711443 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:22.963855 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:22.968059 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:23.074105 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:23.211443 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:23.464424 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:23.466625 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:23.512649 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:23.574010 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:23.711887 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:23.963963 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:23.966286 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:24.074532 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:24.211525 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:24.467239 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:24.468193 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:24.575134 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:24.713028 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:24.965216 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:24.967554 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:25.082074 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:25.211722 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:25.463195 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:25.467121 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:25.575322 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:25.714198 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:25.968557 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:25.973101 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:26.013660 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:26.075341 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:26.211618 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:26.463171 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:26.467701 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:26.574606 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:26.711117 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:26.964479 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:26.966544 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:27.074750 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:27.211693 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:27.463994 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:27.466476 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:27.573867 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:27.711663 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:27.962909 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:27.966029 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:28.015839 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:28.077461 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:28.211627 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:28.463906 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:28.467143 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:28.575147 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:28.714141 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:28.965776 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:28.969632 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:29.074191 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:29.220493 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:29.463121 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:29.466030 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:29.574592 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:29.711967 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:29.964513 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:29.969249 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:30.034927 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:30.075603 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:30.217450 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:30.462879 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:30.467528 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:30.574747 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:30.711958 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:30.974600 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:31.001129 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:31.074746 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:31.211466 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:31.464135 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:31.468570 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:31.574671 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:31.711810 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:31.964240 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:31.971575 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:32.075151 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:32.211550 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:32.463905 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:32.469287 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:32.513999 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:32.574915 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:32.711454 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:32.965556 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:32.970305 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:33.089025 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:33.211799 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:33.463643 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:33.467822 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:33.513207 1135902 pod_ready.go:92] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.513233 1135902 pod_ready.go:81] duration metric: took 39.00759531s waiting for pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.513245 1135902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.519385 1135902 pod_ready.go:92] pod "etcd-addons-107916" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.519409 1135902 pod_ready.go:81] duration metric: took 6.155504ms waiting for pod "etcd-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.519423 1135902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.525907 1135902 pod_ready.go:92] pod "kube-apiserver-addons-107916" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.525934 1135902 pod_ready.go:81] duration metric: took 6.501612ms waiting for pod "kube-apiserver-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.525946 1135902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.532317 1135902 pod_ready.go:92] pod "kube-controller-manager-addons-107916" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.532342 1135902 pod_ready.go:81] duration metric: took 6.388105ms waiting for pod "kube-controller-manager-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.532353 1135902 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wqqx2" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.538857 1135902 pod_ready.go:92] pod "kube-proxy-wqqx2" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.538884 1135902 pod_ready.go:81] duration metric: took 6.52237ms waiting for pod "kube-proxy-wqqx2" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.538896 1135902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.575137 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:33.716219 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:33.910518 1135902 pod_ready.go:92] pod "kube-scheduler-addons-107916" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.910547 1135902 pod_ready.go:81] duration metric: took 371.643092ms waiting for pod "kube-scheduler-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.910559 1135902 pod_ready.go:38] duration metric: took 39.414528016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
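The long run of kapi.go:96 lines above is minikube polling each addon's pods by label selector until they leave Pending, and the pod_ready lines apply the same idea to the system-critical pods: list by label, then test the PodReady condition. A minimal client-go sketch of such a wait loop, assuming a reachable kubeconfig (the path, namespace, and selector below are illustrative, not minikube's actual kapi.go code):

	// readiness_sketch.go - poll pods matching a label selector until every
	// one of them reports the PodReady condition as True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isReady(pod corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				panic(err)
			}
			allReady := len(pods.Items) > 0
			for _, p := range pods.Items {
				if !isReady(p) {
					fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
					allReady = false
				}
			}
			if allReady {
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}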
	I0214 02:56:33.910633 1135902 api_server.go:52] waiting for apiserver process to appear ...
	I0214 02:56:33.910723 1135902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 02:56:33.926908 1135902 api_server.go:72] duration metric: took 41.394553358s to wait for apiserver process to appear ...
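The apiserver process check above is a plain pgrep run through minikube's ssh_runner: -x matches the process name exactly, -n keeps only the newest match, and -f matches the pattern against the full command line. A rough Go equivalent, assuming it is executed directly on the node (minikube itself goes over SSH) and that sudo needs no password there:

	// pgrep_sketch.go - exit-status-based process check; pgrep exits 0 and
	// prints the pid when a matching process exists.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not found:", err)
			return
		}
		fmt.Printf("apiserver pid: %s", out)
	}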
	I0214 02:56:33.926936 1135902 api_server.go:88] waiting for apiserver healthz status ...
	I0214 02:56:33.926957 1135902 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0214 02:56:33.936854 1135902 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0214 02:56:33.938451 1135902 api_server.go:141] control plane version: v1.28.4
	I0214 02:56:33.938480 1135902 api_server.go:131] duration metric: took 11.535734ms to wait for apiserver health ...
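Both probes above can be reproduced against the same cluster with client-go (or, from a shell, `kubectl get --raw=/healthz`). A sketch, assuming the kubeconfig that minikube wrote (path illustrative):

	// healthz_sketch.go - hit /healthz via the discovery REST client, then
	// read the control plane version.
	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body) // expect "ok", as logged above
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.4
	}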
	I0214 02:56:33.938489 1135902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 02:56:33.963864 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:33.968450 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:34.074759 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:34.118074 1135902 system_pods.go:59] 18 kube-system pods found
	I0214 02:56:34.118111 1135902 system_pods.go:61] "coredns-5dd5756b68-frpgv" [725a8f05-de51-4e9b-b8c0-8c1c0c28b9d8] Running
	I0214 02:56:34.118121 1135902 system_pods.go:61] "csi-hostpath-attacher-0" [f819a79e-8675-4723-8c63-c8c1c0564130] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 02:56:34.118131 1135902 system_pods.go:61] "csi-hostpath-resizer-0" [fbb76b19-2a31-4945-96a0-3cecc97d33ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 02:56:34.118141 1135902 system_pods.go:61] "csi-hostpathplugin-5fqvb" [33dff35f-ee07-4587-89ba-846f0bee07db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 02:56:34.118153 1135902 system_pods.go:61] "etcd-addons-107916" [1fe5f000-92d4-47ad-aba2-3ba7e884263e] Running
	I0214 02:56:34.118166 1135902 system_pods.go:61] "kindnet-rthjj" [75af4cf2-01b1-4dca-9bfd-7c24b3dc528e] Running
	I0214 02:56:34.118171 1135902 system_pods.go:61] "kube-apiserver-addons-107916" [36c01377-8ded-4c47-8178-340afadcc26c] Running
	I0214 02:56:34.118179 1135902 system_pods.go:61] "kube-controller-manager-addons-107916" [05719c3d-db88-46b8-bcb5-a11b50f1a47b] Running
	I0214 02:56:34.118187 1135902 system_pods.go:61] "kube-ingress-dns-minikube" [6015e7be-aeae-4d2f-a1ee-3f92e61da1e5] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 02:56:34.118192 1135902 system_pods.go:61] "kube-proxy-wqqx2" [2628f6b4-92c2-45ac-8ef2-cd9a32918e0b] Running
	I0214 02:56:34.118202 1135902 system_pods.go:61] "kube-scheduler-addons-107916" [ea27f96b-bb9c-4fe5-b4dc-41a0dd834064] Running
	I0214 02:56:34.118209 1135902 system_pods.go:61] "metrics-server-69cf46c98-xgpcx" [a75b205a-055e-4b2e-82c2-53e542d18ae2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 02:56:34.118250 1135902 system_pods.go:61] "nvidia-device-plugin-daemonset-qp5mc" [e83ab22d-76cc-418f-9a1e-704888f17ca0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 02:56:34.118265 1135902 system_pods.go:61] "registry-proxy-4vspg" [ed6185ac-833d-49bc-9dbd-44ca26c256ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 02:56:34.118271 1135902 system_pods.go:61] "registry-vq7pw" [76ecca74-b904-428a-957c-e497f46f916d] Running
	I0214 02:56:34.118282 1135902 system_pods.go:61] "snapshot-controller-58dbcc7b99-6tw9t" [a3c8748f-3bbf-450a-8ab8-f682dc3540b3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 02:56:34.118290 1135902 system_pods.go:61] "snapshot-controller-58dbcc7b99-mxxxv" [05d7fa06-002b-46bb-bfca-2acdd4c8d6c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 02:56:34.118299 1135902 system_pods.go:61] "storage-provisioner" [2c9e38cc-5e48-4667-a0a7-9ac74e980de2] Running
	I0214 02:56:34.118306 1135902 system_pods.go:74] duration metric: took 179.811425ms to wait for pod list to return data ...
	I0214 02:56:34.118318 1135902 default_sa.go:34] waiting for default service account to be created ...
	I0214 02:56:34.212384 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:34.311229 1135902 default_sa.go:45] found service account: "default"
	I0214 02:56:34.311258 1135902 default_sa.go:55] duration metric: took 192.932347ms for default service account to be created ...
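The default-service-account wait reduces to a Get that treats NotFound as "not created yet, keep polling" and anything else as a real failure. A sketch, reusing a clientset built as in the readiness sketch above:

	// sa_sketch.go - single readiness probe for the "default" ServiceAccount.
	package sketch

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // service account controller has not created it yet
		}
		if err != nil {
			return false, err
		}
		return true, nil
	}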
	I0214 02:56:34.311277 1135902 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 02:56:34.465096 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:34.467847 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:34.518648 1135902 system_pods.go:86] 18 kube-system pods found
	I0214 02:56:34.518680 1135902 system_pods.go:89] "coredns-5dd5756b68-frpgv" [725a8f05-de51-4e9b-b8c0-8c1c0c28b9d8] Running
	I0214 02:56:34.518690 1135902 system_pods.go:89] "csi-hostpath-attacher-0" [f819a79e-8675-4723-8c63-c8c1c0564130] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 02:56:34.518699 1135902 system_pods.go:89] "csi-hostpath-resizer-0" [fbb76b19-2a31-4945-96a0-3cecc97d33ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 02:56:34.518710 1135902 system_pods.go:89] "csi-hostpathplugin-5fqvb" [33dff35f-ee07-4587-89ba-846f0bee07db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 02:56:34.518717 1135902 system_pods.go:89] "etcd-addons-107916" [1fe5f000-92d4-47ad-aba2-3ba7e884263e] Running
	I0214 02:56:34.518727 1135902 system_pods.go:89] "kindnet-rthjj" [75af4cf2-01b1-4dca-9bfd-7c24b3dc528e] Running
	I0214 02:56:34.518733 1135902 system_pods.go:89] "kube-apiserver-addons-107916" [36c01377-8ded-4c47-8178-340afadcc26c] Running
	I0214 02:56:34.518742 1135902 system_pods.go:89] "kube-controller-manager-addons-107916" [05719c3d-db88-46b8-bcb5-a11b50f1a47b] Running
	I0214 02:56:34.518750 1135902 system_pods.go:89] "kube-ingress-dns-minikube" [6015e7be-aeae-4d2f-a1ee-3f92e61da1e5] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 02:56:34.518756 1135902 system_pods.go:89] "kube-proxy-wqqx2" [2628f6b4-92c2-45ac-8ef2-cd9a32918e0b] Running
	I0214 02:56:34.518764 1135902 system_pods.go:89] "kube-scheduler-addons-107916" [ea27f96b-bb9c-4fe5-b4dc-41a0dd834064] Running
	I0214 02:56:34.518771 1135902 system_pods.go:89] "metrics-server-69cf46c98-xgpcx" [a75b205a-055e-4b2e-82c2-53e542d18ae2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 02:56:34.518782 1135902 system_pods.go:89] "nvidia-device-plugin-daemonset-qp5mc" [e83ab22d-76cc-418f-9a1e-704888f17ca0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 02:56:34.518789 1135902 system_pods.go:89] "registry-proxy-4vspg" [ed6185ac-833d-49bc-9dbd-44ca26c256ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 02:56:34.518796 1135902 system_pods.go:89] "registry-vq7pw" [76ecca74-b904-428a-957c-e497f46f916d] Running
	I0214 02:56:34.518803 1135902 system_pods.go:89] "snapshot-controller-58dbcc7b99-6tw9t" [a3c8748f-3bbf-450a-8ab8-f682dc3540b3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 02:56:34.518811 1135902 system_pods.go:89] "snapshot-controller-58dbcc7b99-mxxxv" [05d7fa06-002b-46bb-bfca-2acdd4c8d6c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 02:56:34.518818 1135902 system_pods.go:89] "storage-provisioner" [2c9e38cc-5e48-4667-a0a7-9ac74e980de2] Running
	I0214 02:56:34.518826 1135902 system_pods.go:126] duration metric: took 207.5432ms to wait for k8s-apps to be running ...
	I0214 02:56:34.518838 1135902 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 02:56:34.518898 1135902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 02:56:34.534371 1135902 system_svc.go:56] duration metric: took 15.522033ms WaitForService to wait for kubelet.
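The kubelet probe above relies on systemd exit codes: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so a nil error from Run means the service is up. A direct-on-node sketch (minikube runs the logged variant of this command over SSH):

	// kubelet_sketch.go - exit-status check of the kubelet systemd unit.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}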
	I0214 02:56:34.534404 1135902 kubeadm.go:581] duration metric: took 42.002049987s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0214 02:56:34.534446 1135902 node_conditions.go:102] verifying NodePressure condition ...
	I0214 02:56:34.573739 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:34.713804 1135902 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 02:56:34.713845 1135902 node_conditions.go:123] node cpu capacity is 2
	I0214 02:56:34.713859 1135902 node_conditions.go:105] duration metric: took 179.403403ms to run NodePressure ...
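The NodePressure step reads each node's capacity and conditions, which is where the 203034800Ki ephemeral-storage and 2-CPU figures above come from. A sketch of the same read, with the clientset built as in the readiness sketch:

	// node_sketch.go - print per-node capacity and pressure conditions; on a
	// healthy node MemoryPressure/DiskPressure/PIDPressure all report False.
	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
			for _, c := range n.Status.Conditions {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
		return nil
	}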
	I0214 02:56:34.713871 1135902 start.go:228] waiting for startup goroutines ...
	I0214 02:56:34.715360 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:34.963963 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:34.966197 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:35.075715 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:35.211898 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:35.463303 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:35.466302 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:35.573929 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:35.711879 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:35.964996 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:35.966118 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:36.074398 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:36.212398 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:36.465560 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:36.467207 1135902 kapi.go:107] duration metric: took 36.507928754s to wait for kubernetes.io/minikube-addons=registry ...
	I0214 02:56:36.574686 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:36.711507 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:36.963141 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:37.075628 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:37.212307 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:37.463151 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:37.575166 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:37.712211 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:37.963529 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:38.075133 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:38.213023 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:38.464305 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:38.574829 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:38.711440 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:38.962956 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:39.081028 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:39.218968 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:39.464684 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:39.581641 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:39.711520 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:39.963217 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:40.076108 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:40.211929 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:40.463368 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:40.574458 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:40.719834 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:40.967016 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:41.074781 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:41.211201 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:41.464163 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:41.574839 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:41.712230 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:41.964017 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:42.075693 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:42.213423 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:42.463349 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:42.574527 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:42.712675 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:42.963568 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:43.074302 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:43.211719 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:43.463051 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:43.574951 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:43.713806 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:43.963209 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:44.074808 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:44.212950 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:44.464019 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:44.575081 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:44.711252 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:44.963193 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:45.085170 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:45.225919 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:45.462891 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:45.574838 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:45.712162 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:45.963527 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:46.074707 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:46.213851 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:46.463560 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:46.574676 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:46.717508 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:46.963643 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:47.082366 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:47.211109 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:47.464115 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:47.575202 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:47.715260 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:47.963534 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:48.074903 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:48.211309 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:48.463736 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:48.574370 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:48.711381 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:48.963519 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:49.082174 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:49.212046 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:49.463631 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:49.574376 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:49.719827 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:49.963800 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:50.085989 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:50.213191 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:50.464265 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:50.574917 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:50.712277 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:50.965760 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:51.074635 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:51.214381 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:51.465277 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:51.574148 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:51.711258 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:51.963194 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:52.074340 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:52.211751 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:52.463308 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:52.574217 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:52.712321 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:52.962706 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:53.074261 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:53.210449 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:53.464591 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:53.574015 1135902 kapi.go:107] duration metric: took 50.503674897s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0214 02:56:53.579059 1135902 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-107916 cluster.
	I0214 02:56:53.581165 1135902 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0214 02:56:53.583105 1135902 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
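Per the messages above, opting a pod out of credential mounting requires a label with the `gcp-auth-skip-secret` key; the message only mentions the key, so the "true" value below is illustrative. Expressed with client-go types rather than YAML, a minimal example:

	// skip_gcp_auth.go - pod labeled so the gcp-auth addon leaves it alone.
	package sketch

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	var podWithoutCreds = corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "nginx",
			}},
		},
	}

As the last message notes, pods that already exist only pick this behavior up after being recreated, or after rerunning addons enable with --refresh.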
	I0214 02:56:53.713600 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:53.963124 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:54.211650 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:54.462937 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:54.711280 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:54.962680 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:55.213776 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:55.463416 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:55.711676 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:55.963681 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:56.213205 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:56.462806 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:56.711346 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:56.963362 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:57.210681 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:57.463106 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:57.713730 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:57.963440 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:58.211333 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:58.477589 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:58.712168 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:58.966022 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:59.211773 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:59.462825 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:59.711553 1135902 kapi.go:107] duration metric: took 58.008239466s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0214 02:56:59.963678 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:00.465089 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:00.962524 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:01.463377 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:01.963205 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:02.463268 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:02.963749 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:03.463176 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:03.962891 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:04.463644 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:04.963638 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:05.462626 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:05.969143 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:06.463570 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:06.963613 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:07.465834 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:07.963195 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:08.463237 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:08.962760 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:09.465758 1135902 kapi.go:107] duration metric: took 1m9.512364179s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0214 02:57:09.467817 1135902 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0214 02:57:09.469746 1135902 addons.go:505] enable addons completed in 1m17.487639726s: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0214 02:57:09.469797 1135902 start.go:233] waiting for cluster config update ...
	I0214 02:57:09.469832 1135902 start.go:242] writing updated cluster config ...
	I0214 02:57:09.470179 1135902 ssh_runner.go:195] Run: rm -f paused
	I0214 02:57:09.824976 1135902 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0214 02:57:09.827770 1135902 out.go:177] * Done! kubectl is now configured to use "addons-107916" cluster and "default" namespace by default
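
A note on the "minor skew: 1" figure logged above: minikube simply compares the kubectl client's minor version (1.29) with the cluster's server version (1.28) and reports the difference, only warning when the skew exceeds what kubectl supports. An illustrative way to run the same comparison by hand (not part of the captured run; assumes jq is available):

	# print client and server versions side by side (illustrative)
	kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'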
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	3b2cdb1a2ee3c       dd1b12fcb6097       6 seconds ago        Exited              hello-world-app                          2                   9eef6a590e24f       hello-world-app-5d77478584-vrjll
	1858190050b4d       d315ef79be32c       31 seconds ago       Running             nginx                                    0                   0e50a2d1a7dd8       nginx
	36f8987acedf5       fc9db2894f4e4       53 seconds ago       Exited              helper-pod                               0                   9aa54732e524e       helper-pod-delete-pvc-2358e9d1-a1ee-49c0-8dab-57be5f72d3ad
	7bed06d411831       21648f71be814       About a minute ago   Running             headlamp                                 0                   9abbb53f4262f       headlamp-7ddfbb94ff-59lmx
	e724582a408a1       fe00dc95515ba       About a minute ago   Exited              controller                               0                   28a9be30ce59c       ingress-nginx-controller-7967645744-4btrf
	1b04ca41722c7       ee6d597e62dc8       About a minute ago   Running             csi-snapshotter                          0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	955f8a4d6be84       642ded511e141       About a minute ago   Running             csi-provisioner                          0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	a3efd34b39f49       922312104da8a       About a minute ago   Running             liveness-probe                           0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	2b70b5b79de92       08f6b2990811a       About a minute ago   Running             hostpath                                 0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	1b61eda6a526a       0107d56dbc0be       About a minute ago   Running             node-driver-registrar                    0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	519c86bb0f78b       2a5f29343eb03       About a minute ago   Running             gcp-auth                                 0                   c148a5cb55766       gcp-auth-d4c87556c-n5vd4
	970467a7928ea       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   2145126ac5ccb       local-path-provisioner-78b46b4d5c-h6679
	f6c74fccf9510       20e3f2db01e81       About a minute ago   Running             yakd                                     0                   4d64c95b0fcbc       yakd-dashboard-9947fc6bf-mv4gt
	ef7b5974b2820       9a80d518f102c       About a minute ago   Running             csi-attacher                             0                   6f0a337c500b5       csi-hostpath-attacher-0
	c8cb76b91902c       487fa743e1e22       About a minute ago   Running             csi-resizer                              0                   827419cdf4c3c       csi-hostpath-resizer-0
	a58073f9256ac       f8c5dfd0ede5f       About a minute ago   Exited              patch                                    2                   67fe6bb56b812       ingress-nginx-admission-patch-5rc9q
	360779012a3e0       f8c5dfd0ede5f       About a minute ago   Exited              create                                   0                   a13cb4d754b97       ingress-nginx-admission-create-brrq9
	b35a6c596cfa6       1461903ec4fe9       About a minute ago   Running             csi-external-health-monitor-controller   0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	a8b9ad96a4cf3       97e04611ad434       About a minute ago   Running             coredns                                  0                   8145a79f8e3c5       coredns-5dd5756b68-frpgv
	7aa64a9ad4c3a       ba04bb24b9575       2 minutes ago        Running             storage-provisioner                      0                   fe384abf830ea       storage-provisioner
	fa504d1b8fe72       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                               0                   4355787625a52       kube-proxy-wqqx2
	244bda7cba554       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni                              0                   5733ea23009bc       kindnet-rthjj
	400b3f8f47c62       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver                           0                   5e70764e3f506       kube-apiserver-addons-107916
	b80dce1302efb       05c284c929889       2 minutes ago        Running             kube-scheduler                           0                   eee3d9018507d       kube-scheduler-addons-107916
	f80836cb769d3       9961cbceaf234       2 minutes ago        Running             kube-controller-manager                  0                   972feb1225acd       kube-controller-manager-addons-107916
	873197f66b7ad       9cdd6470f48c8       2 minutes ago        Running             etcd                                     0                   c7ffa69b73291       etcd-addons-107916
	
	
	==> containerd <==
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.831950850Z" level=error msg="ContainerStatus for \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\": not found"
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.617396780Z" level=info msg="CreateContainer within sandbox \"9eef6a590e24f2b17c9d029d59c45862774a3c1c8f656b268cd4085640901419\" for container &ContainerMetadata{Name:hello-world-app,Attempt:2,}"
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.647467069Z" level=info msg="CreateContainer within sandbox \"9eef6a590e24f2b17c9d029d59c45862774a3c1c8f656b268cd4085640901419\" for &ContainerMetadata{Name:hello-world-app,Attempt:2,} returns container id \"3b2cdb1a2ee3c26c7f3297bbe0e4b65850cbf4fa5fba6512d177e9b99fbb3be3\""
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.650626665Z" level=info msg="StartContainer for \"3b2cdb1a2ee3c26c7f3297bbe0e4b65850cbf4fa5fba6512d177e9b99fbb3be3\""
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.726314497Z" level=info msg="StartContainer for \"3b2cdb1a2ee3c26c7f3297bbe0e4b65850cbf4fa5fba6512d177e9b99fbb3be3\" returns successfully"
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.756235063Z" level=info msg="shim disconnected" id=3b2cdb1a2ee3c26c7f3297bbe0e4b65850cbf4fa5fba6512d177e9b99fbb3be3
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.756294672Z" level=warning msg="cleaning up after shim disconnected" id=3b2cdb1a2ee3c26c7f3297bbe0e4b65850cbf4fa5fba6512d177e9b99fbb3be3 namespace=k8s.io
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.756305601Z" level=info msg="cleaning up dead shim"
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.764352238Z" level=warning msg="cleanup warnings time=\"2024-02-14T02:58:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10517 runtime=io.containerd.runc.v2\n"
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.822535165Z" level=info msg="RemoveContainer for \"faf8c51efff8f7b2b0ea8ee6d04dc9b4f667062dba51e412a107873db892cf63\""
	Feb 14 02:58:24 addons-107916 containerd[739]: time="2024-02-14T02:58:24.837590576Z" level=info msg="RemoveContainer for \"faf8c51efff8f7b2b0ea8ee6d04dc9b4f667062dba51e412a107873db892cf63\" returns successfully"
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.610673758Z" level=info msg="Kill container \"e724582a408a125eba7218650a6cd2025803dc85d11eb904cfa8505398668dda\""
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.716751304Z" level=info msg="shim disconnected" id=e724582a408a125eba7218650a6cd2025803dc85d11eb904cfa8505398668dda
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.716818142Z" level=warning msg="cleaning up after shim disconnected" id=e724582a408a125eba7218650a6cd2025803dc85d11eb904cfa8505398668dda namespace=k8s.io
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.716828669Z" level=info msg="cleaning up dead shim"
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.731033009Z" level=warning msg="cleanup warnings time=\"2024-02-14T02:58:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10624 runtime=io.containerd.runc.v2\n"
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.734244903Z" level=info msg="StopContainer for \"e724582a408a125eba7218650a6cd2025803dc85d11eb904cfa8505398668dda\" returns successfully"
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.734842967Z" level=info msg="StopPodSandbox for \"28a9be30ce59cbfb7b242e9d4abef0006c86245147b63ae5520d0eed1db48767\""
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.734922579Z" level=info msg="Container to stop \"e724582a408a125eba7218650a6cd2025803dc85d11eb904cfa8505398668dda\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.803090431Z" level=info msg="shim disconnected" id=28a9be30ce59cbfb7b242e9d4abef0006c86245147b63ae5520d0eed1db48767
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.803786666Z" level=warning msg="cleaning up after shim disconnected" id=28a9be30ce59cbfb7b242e9d4abef0006c86245147b63ae5520d0eed1db48767 namespace=k8s.io
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.803913982Z" level=info msg="cleaning up dead shim"
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.823954540Z" level=warning msg="cleanup warnings time=\"2024-02-14T02:58:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10659 runtime=io.containerd.runc.v2\n"
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.908118421Z" level=info msg="TearDown network for sandbox \"28a9be30ce59cbfb7b242e9d4abef0006c86245147b63ae5520d0eed1db48767\" successfully"
	Feb 14 02:58:25 addons-107916 containerd[739]: time="2024-02-14T02:58:25.908190182Z" level=info msg="StopPodSandbox for \"28a9be30ce59cbfb7b242e9d4abef0006c86245147b63ae5520d0eed1db48767\" returns successfully"
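
The containerd log above captures two separate events: the hello-world-app container (3b2cdb1a2ee3c…) is recreated and its shim exits almost immediately, consistent with the crash-looping pod, and the ingress-nginx controller container plus its sandbox (28a9be30ce59c…) are stopped and torn down as the ingress addon is disabled. The same view is available from inside the node via crictl (illustrative commands, not part of the captured run; crictl ships in the minikube node image):

	# list all containers, including exited ones, as containerd sees them (illustrative)
	minikube -p addons-107916 ssh -- sudo crictl ps -a
	# fetch logs for the crash-looping container by ID prefix (illustrative)
	minikube -p addons-107916 ssh -- sudo crictl logs 3b2cdb1a2ee3c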
	
	
	==> coredns [a8b9ad96a4cf381ad63937cb9cd00b9b8fb38ee1eba33858825e01aed6d326a2] <==
	[INFO] 10.244.0.20:48714 - 61020 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084338s
	[INFO] 10.244.0.20:48714 - 23426 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045308s
	[INFO] 10.244.0.20:48714 - 39862 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069422s
	[INFO] 10.244.0.20:48714 - 18198 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000102741s
	[INFO] 10.244.0.20:48714 - 22409 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001469975s
	[INFO] 10.244.0.20:48714 - 14926 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00119155s
	[INFO] 10.244.0.20:48714 - 50223 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081639s
	[INFO] 10.244.0.20:42653 - 48853 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105499s
	[INFO] 10.244.0.20:42653 - 18553 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059436s
	[INFO] 10.244.0.20:35425 - 9339 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000852s
	[INFO] 10.244.0.20:35425 - 16205 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066181s
	[INFO] 10.244.0.20:42653 - 9862 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000117232s
	[INFO] 10.244.0.20:42653 - 19290 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079661s
	[INFO] 10.244.0.20:35425 - 11610 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044052s
	[INFO] 10.244.0.20:35425 - 65115 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000300759s
	[INFO] 10.244.0.20:42653 - 10670 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042083s
	[INFO] 10.244.0.20:35425 - 25109 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000141854s
	[INFO] 10.244.0.20:35425 - 53028 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044585s
	[INFO] 10.244.0.20:42653 - 5577 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034362s
	[INFO] 10.244.0.20:35425 - 16456 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001297065s
	[INFO] 10.244.0.20:35425 - 33151 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001062568s
	[INFO] 10.244.0.20:35425 - 50309 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074255s
	[INFO] 10.244.0.20:42653 - 63917 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004257675s
	[INFO] 10.244.0.20:42653 - 22426 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001077281s
	[INFO] 10.244.0.20:42653 - 35198 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067149s
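
The NXDOMAIN/NOERROR pairs above are ordinary ndots:5 search-path expansion, not a resolution failure: the client (10.244.0.20, the ingress-nginx controller) retries hello-world-app.default.svc.cluster.local with each suffix from its search list, including the node's us-east-2.compute.internal EC2 domain, before the fully-qualified name answers NOERROR. A pod resolv.conf that produces exactly this pattern looks like the following (illustrative reconstruction from the suffixes visible in the log; the nameserver IP is the conventional kube-dns ClusterIP and was not captured here):

	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10   # assumed: the default kube-dns service IP
	options ndots:5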
	
	
	==> describe nodes <==
	Name:               addons-107916
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-107916
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40f210e92693e4612e04be0697de06db21ac5cf0
	                    minikube.k8s.io/name=addons-107916
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T02_55_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-107916
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-107916"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 02:55:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-107916
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 02:58:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 02:58:12 +0000   Wed, 14 Feb 2024 02:55:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 02:58:12 +0000   Wed, 14 Feb 2024 02:55:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 02:58:12 +0000   Wed, 14 Feb 2024 02:55:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 02:58:12 +0000   Wed, 14 Feb 2024 02:55:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-107916
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 244d646cad4e4280924e3223b942fed4
	  System UUID:                281434a1-6832-43d6-8627-858f8134a6ff
	  Boot ID:                    b6f8a130-5377-4a84-9795-3edbfc6d2fc5
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-vrjll           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  gcp-auth                    gcp-auth-d4c87556c-n5vd4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  headlamp                    headlamp-7ddfbb94ff-59lmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 coredns-5dd5756b68-frpgv                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m39s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 csi-hostpathplugin-5fqvb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 etcd-addons-107916                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m52s
	  kube-system                 kindnet-rthjj                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m39s
	  kube-system                 kube-apiserver-addons-107916               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-controller-manager-addons-107916      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-proxy-wqqx2                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-scheduler-addons-107916               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  local-path-storage          local-path-provisioner-78b46b4d5c-h6679    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-mv4gt             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m37s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x8 over 3m)  kubelet          Node addons-107916 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x8 over 3m)  kubelet          Node addons-107916 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x7 over 3m)  kubelet          Node addons-107916 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m52s            kubelet          Node addons-107916 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s            kubelet          Node addons-107916 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s            kubelet          Node addons-107916 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m52s            kubelet          Node addons-107916 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m52s            kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m52s            kubelet          Node addons-107916 status is now: NodeReady
	  Normal  Starting                 2m52s            kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m40s            node-controller  Node addons-107916 event: Registered Node addons-107916 in Controller
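
The percentages in the Allocated resources table above are simply requests and limits over node allocatable: 850m of CPU against the 2-CPU (2000m) allocatable is 850/2000 = 42.5%, reported as 42%, and 348Mi of memory against 8022496Ki (about 7834Mi) is roughly 4.4%, reported as 4%. The per-pod figures follow the same rule, e.g. coredns's 100m CPU request is 100/2000 = 5%.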
	
	
	==> dmesg <==
	[  +0.001133] FS-Cache: O-key=[8] '2bd5c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001075] FS-Cache: N-key=[8] '2bd5c90000000000'
	[  +0.002828] FS-Cache: Duplicate cookie detected
	[  +0.000708] FS-Cache: O-cookie c=0000003b [p=00000039 fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=0000000076fc1031
	[  +0.001081] FS-Cache: O-key=[8] '2bd5c90000000000'
	[  +0.000709] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000005e2f857b
	[  +0.001050] FS-Cache: N-key=[8] '2bd5c90000000000'
	[  +2.757072] FS-Cache: Duplicate cookie detected
	[  +0.000789] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000994] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=0000000073828904
	[  +0.001121] FS-Cache: O-key=[8] '2ad5c90000000000'
	[  +0.000813] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001101] FS-Cache: N-key=[8] '2ad5c90000000000'
	[  +0.290556] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000975] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=00000000eab8090b
	[  +0.001047] FS-Cache: O-key=[8] '30d5c90000000000'
	[  +0.000761] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=00000000bc792bf3
	[  +0.001026] FS-Cache: N-key=[8] '30d5c90000000000'
	
	
	==> etcd [873197f66b7ad68ed2fb2cbf1116587a9c2034c96c29937c781284a776a67d44] <==
	{"level":"info","ts":"2024-02-14T02:55:32.472253Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-14T02:55:32.472433Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-14T02:55:32.472458Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-14T02:55:32.472558Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T02:55:32.472568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T02:55:32.47285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-14T02:55:32.472958Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-14T02:55:33.451629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-14T02:55:33.451756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-14T02:55:33.451824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-02-14T02:55:33.451883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-02-14T02:55:33.451927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T02:55:33.45197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-14T02:55:33.45201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T02:55:33.459562Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:55:33.459868Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-107916 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T02:55:33.46004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T02:55:33.461106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-14T02:55:33.461289Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T02:55:33.462211Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T02:55:33.462466Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T02:55:33.463503Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T02:55:33.505947Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:55:33.507576Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:55:33.507609Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [519c86bb0f78be69999f9a4055dcfbcdb019ed71fd8c24bac655163ac496009f] <==
	2024/02/14 02:56:52 GCP Auth Webhook started!
	2024/02/14 02:57:11 Ready to marshal response ...
	2024/02/14 02:57:11 Ready to write response ...
	2024/02/14 02:57:11 Ready to marshal response ...
	2024/02/14 02:57:11 Ready to write response ...
	2024/02/14 02:57:11 Ready to marshal response ...
	2024/02/14 02:57:11 Ready to write response ...
	2024/02/14 02:57:22 Ready to marshal response ...
	2024/02/14 02:57:22 Ready to write response ...
	2024/02/14 02:57:28 Ready to marshal response ...
	2024/02/14 02:57:28 Ready to write response ...
	2024/02/14 02:57:28 Ready to marshal response ...
	2024/02/14 02:57:28 Ready to write response ...
	2024/02/14 02:57:36 Ready to marshal response ...
	2024/02/14 02:57:36 Ready to write response ...
	2024/02/14 02:57:40 Ready to marshal response ...
	2024/02/14 02:57:40 Ready to write response ...
	2024/02/14 02:57:57 Ready to marshal response ...
	2024/02/14 02:57:57 Ready to write response ...
	2024/02/14 02:58:05 Ready to marshal response ...
	2024/02/14 02:58:05 Ready to write response ...
	2024/02/14 02:58:12 Ready to marshal response ...
	2024/02/14 02:58:12 Ready to write response ...
	
	
	==> kernel <==
	 02:58:31 up  5:40,  0 users,  load average: 2.25, 1.60, 1.82
	Linux addons-107916 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [244bda7cba5544ede26d1a161fde871f1f0343ea53826e30e2579932cb6fe3e1] <==
	I0214 02:56:23.552361       1 main.go:227] handling current node
	I0214 02:56:33.567574       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:56:33.567599       1 main.go:227] handling current node
	I0214 02:56:43.576727       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:56:43.576760       1 main.go:227] handling current node
	I0214 02:56:53.580637       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:56:53.580666       1 main.go:227] handling current node
	I0214 02:57:03.593556       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:03.593585       1 main.go:227] handling current node
	I0214 02:57:13.606457       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:13.606486       1 main.go:227] handling current node
	I0214 02:57:23.619513       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:23.619540       1 main.go:227] handling current node
	I0214 02:57:33.624359       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:33.624390       1 main.go:227] handling current node
	I0214 02:57:43.634965       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:43.634990       1 main.go:227] handling current node
	I0214 02:57:53.639167       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:53.639198       1 main.go:227] handling current node
	I0214 02:58:03.652747       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:58:03.652777       1 main.go:227] handling current node
	I0214 02:58:13.664129       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:58:13.664157       1 main.go:227] handling current node
	I0214 02:58:23.669283       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:58:23.669321       1 main.go:227] handling current node
	
	
	==> kube-apiserver [400b3f8f47c624b7f70d161c4f843d2f25f3215c8c38a178fac0938c3bbfa36c] <==
	W0214 02:57:45.958585       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0214 02:57:50.220776       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0214 02:57:57.506525       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0214 02:57:57.751188       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.160.194"}
	I0214 02:57:58.117491       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0214 02:58:05.521665       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.122.53"}
	I0214 02:58:22.864228       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.864291       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.878527       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.878750       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.895894       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.899313       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.998330       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.998372       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.998442       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.998464       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.999335       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.999379       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:23.022103       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:23.022893       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:23.028286       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:23.028325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0214 02:58:23.999246       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0214 02:58:24.028052       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0214 02:58:24.046522       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
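
The "Terminating all watchers" lines at 02:58:23-24 mark the moment the snapshot.storage.k8s.io CRDs are deleted as the volumesnapshots/csi-hostpath-driver addons are torn down; every open watch on those resources is forcibly closed and the group drops out of API discovery. An illustrative way to confirm the group is gone (not part of the captured run):

	# after the addon teardown this should list no resources (illustrative)
	kubectl --context addons-107916 api-resources --api-group=snapshot.storage.k8s.io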
	
	
	==> kube-controller-manager [f80836cb769d35d70e4058ed2c08a868ec2ec839d44473cf672960a0c82a2102] <==
	E0214 02:58:22.439969       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 02:58:22.572800       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0214 02:58:22.576210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="7.828µs"
	I0214 02:58:22.587069       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0214 02:58:23.074947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="8.435µs"
	E0214 02:58:24.001405       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:24.030834       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:24.048398       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 02:58:24.827192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.385µs"
	W0214 02:58:24.936763       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:24.936797       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 02:58:25.039207       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:25.039243       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 02:58:25.175794       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:25.175827       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 02:58:27.059824       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:27.059867       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 02:58:27.521897       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:27.521938       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 02:58:28.165295       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:28.165332       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 02:58:30.806490       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:30.806530       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 02:58:31.015122       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:31.015156       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [fa504d1b8fe727aba4f266a4b61f44c00699490b9f9ea9a99de52d358a959cbd] <==
	I0214 02:55:53.431034       1 server_others.go:69] "Using iptables proxy"
	I0214 02:55:53.464296       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0214 02:55:53.537963       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 02:55:53.540280       1 server_others.go:152] "Using iptables Proxier"
	I0214 02:55:53.540328       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 02:55:53.540337       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 02:55:53.540368       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 02:55:53.540581       1 server.go:846] "Version info" version="v1.28.4"
	I0214 02:55:53.540596       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 02:55:53.542865       1 config.go:188] "Starting service config controller"
	I0214 02:55:53.542887       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 02:55:53.542908       1 config.go:97] "Starting endpoint slice config controller"
	I0214 02:55:53.542911       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 02:55:53.543832       1 config.go:315] "Starting node config controller"
	I0214 02:55:53.543843       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 02:55:53.643879       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0214 02:55:53.643948       1 shared_informer.go:318] Caches are synced for node config
	I0214 02:55:53.643962       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b80dce1302efbac98cbe18a6f823462f0bba917d6788e1dbe962e8e5c877057f] <==
	W0214 02:55:36.509554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 02:55:36.509635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0214 02:55:36.509789       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 02:55:36.509874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0214 02:55:36.509985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 02:55:36.510058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0214 02:55:36.510161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 02:55:36.510229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0214 02:55:36.510337       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 02:55:36.510387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 02:55:36.510609       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 02:55:36.511532       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 02:55:37.361948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 02:55:37.362040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 02:55:37.410377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0214 02:55:37.410419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0214 02:55:37.442568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0214 02:55:37.442810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0214 02:55:37.496612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 02:55:37.496903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0214 02:55:37.588656       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 02:55:37.588868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0214 02:55:37.629756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 02:55:37.629995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0214 02:55:38.095399       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.616205    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6015e7be-aeae-4d2f-a1ee-3f92e61da1e5" path="/var/lib/kubelet/pods/6015e7be-aeae-4d2f-a1ee-3f92e61da1e5/volumes"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.616692    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="60481df1-66d8-42ed-b156-09cc2f49055d" path="/var/lib/kubelet/pods/60481df1-66d8-42ed-b156-09cc2f49055d/volumes"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.617152    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d75f039e-8f96-4e34-9d6c-2cad4e54eb36" path="/var/lib/kubelet/pods/d75f039e-8f96-4e34-9d6c-2cad4e54eb36/volumes"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.788813    1338 scope.go:117] "RemoveContainer" containerID="732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.799007    1338 scope.go:117] "RemoveContainer" containerID="732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: E0214 02:58:23.799460    1338 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\": not found" containerID="732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.799564    1338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"} err="failed to get container status \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\": rpc error: code = NotFound desc = an error occurred when try to find container \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\": not found"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.799579    1338 scope.go:117] "RemoveContainer" containerID="3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.822074    1338 scope.go:117] "RemoveContainer" containerID="3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: E0214 02:58:23.835283    1338 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\": not found" containerID="3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.835379    1338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"} err="failed to get container status \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\": not found"
	Feb 14 02:58:24 addons-107916 kubelet[1338]: I0214 02:58:24.612962    1338 scope.go:117] "RemoveContainer" containerID="faf8c51efff8f7b2b0ea8ee6d04dc9b4f667062dba51e412a107873db892cf63"
	Feb 14 02:58:24 addons-107916 kubelet[1338]: I0214 02:58:24.808645    1338 scope.go:117] "RemoveContainer" containerID="faf8c51efff8f7b2b0ea8ee6d04dc9b4f667062dba51e412a107873db892cf63"
	Feb 14 02:58:24 addons-107916 kubelet[1338]: I0214 02:58:24.809019    1338 scope.go:117] "RemoveContainer" containerID="3b2cdb1a2ee3c26c7f3297bbe0e4b65850cbf4fa5fba6512d177e9b99fbb3be3"
	Feb 14 02:58:24 addons-107916 kubelet[1338]: E0214 02:58:24.809320    1338 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-vrjll_default(a2adc08d-1c4a-4481-9f88-609698caed6a)\"" pod="default/hello-world-app-5d77478584-vrjll" podUID="a2adc08d-1c4a-4481-9f88-609698caed6a"
	Feb 14 02:58:25 addons-107916 kubelet[1338]: I0214 02:58:25.623740    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="05d7fa06-002b-46bb-bfca-2acdd4c8d6c1" path="/var/lib/kubelet/pods/05d7fa06-002b-46bb-bfca-2acdd4c8d6c1/volumes"
	Feb 14 02:58:25 addons-107916 kubelet[1338]: I0214 02:58:25.624686    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a3c8748f-3bbf-450a-8ab8-f682dc3540b3" path="/var/lib/kubelet/pods/a3c8748f-3bbf-450a-8ab8-f682dc3540b3/volumes"
	Feb 14 02:58:25 addons-107916 kubelet[1338]: I0214 02:58:25.848417    1338 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a9be30ce59cbfb7b242e9d4abef0006c86245147b63ae5520d0eed1db48767"
	Feb 14 02:58:25 addons-107916 kubelet[1338]: I0214 02:58:25.994957    1338 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0550a34f-6cd0-492a-a2d8-a4191d0c816b-webhook-cert\") pod \"0550a34f-6cd0-492a-a2d8-a4191d0c816b\" (UID: \"0550a34f-6cd0-492a-a2d8-a4191d0c816b\") "
	Feb 14 02:58:25 addons-107916 kubelet[1338]: I0214 02:58:25.995017    1338 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrnvz\" (UniqueName: \"kubernetes.io/projected/0550a34f-6cd0-492a-a2d8-a4191d0c816b-kube-api-access-vrnvz\") pod \"0550a34f-6cd0-492a-a2d8-a4191d0c816b\" (UID: \"0550a34f-6cd0-492a-a2d8-a4191d0c816b\") "
	Feb 14 02:58:25 addons-107916 kubelet[1338]: I0214 02:58:25.997207    1338 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0550a34f-6cd0-492a-a2d8-a4191d0c816b-kube-api-access-vrnvz" (OuterVolumeSpecName: "kube-api-access-vrnvz") pod "0550a34f-6cd0-492a-a2d8-a4191d0c816b" (UID: "0550a34f-6cd0-492a-a2d8-a4191d0c816b"). InnerVolumeSpecName "kube-api-access-vrnvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 14 02:58:25 addons-107916 kubelet[1338]: I0214 02:58:25.997527    1338 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0550a34f-6cd0-492a-a2d8-a4191d0c816b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0550a34f-6cd0-492a-a2d8-a4191d0c816b" (UID: "0550a34f-6cd0-492a-a2d8-a4191d0c816b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 02:58:26 addons-107916 kubelet[1338]: I0214 02:58:26.095468    1338 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0550a34f-6cd0-492a-a2d8-a4191d0c816b-webhook-cert\") on node \"addons-107916\" DevicePath \"\""
	Feb 14 02:58:26 addons-107916 kubelet[1338]: I0214 02:58:26.095531    1338 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vrnvz\" (UniqueName: \"kubernetes.io/projected/0550a34f-6cd0-492a-a2d8-a4191d0c816b-kube-api-access-vrnvz\") on node \"addons-107916\" DevicePath \"\""
	Feb 14 02:58:27 addons-107916 kubelet[1338]: I0214 02:58:27.615357    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0550a34f-6cd0-492a-a2d8-a4191d0c816b" path="/var/lib/kubelet/pods/0550a34f-6cd0-492a-a2d8-a4191d0c816b/volumes"
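
Note on the paired "RemoveContainer" entries above: the second delete of each container ID fails with gRPC NotFound because the first call already removed it; kubelet logs the error and moves on, which is the intended idempotent-delete behavior rather than a fault. Below is a minimal sketch of that pattern against the CRI runtime service (the function name and client wiring are illustrative, not kubelet's actual code):

```go
package cri

import (
	"context"
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// RemoveContainerIdempotent deletes a container through the CRI runtime
// service and treats NotFound as success: a second caller (or the runtime's
// own cleanup) may have removed it first, which is exactly what the kubelet
// log above shows for each container ID deleted twice.
func RemoveContainerIdempotent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	_, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		log.Printf("container %s already gone; treating delete as success", id)
		return nil
	}
	return err
}
```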
	
	
	==> storage-provisioner [7aa64a9ad4c3a86542d0495ced8fe5b123bc91517f08aeda2ce997fdcb9b6f54] <==
	I0214 02:55:59.233959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 02:55:59.256510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 02:55:59.256564       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 02:55:59.266307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 02:55:59.270146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-107916_763b5459-63ae-4459-9bdb-9e0466a6ab53!
	I0214 02:55:59.273139       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15f0e442-57af-411b-b639-9f6ff974b2a2", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-107916_763b5459-63ae-4459-9bdb-9e0466a6ab53 became leader
	I0214 02:55:59.371754       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-107916_763b5459-63ae-4459-9bdb-9e0466a6ab53!
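
The storage-provisioner section shows a textbook client-go leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lock (here an Endpoints-based lock, per the Event on Kind:"Endpoints") and only then starts its provisioner controller. A minimal sketch of the same handshake, assuming in-cluster config and using the Lease lock that current client-go prefers; the identity and timings are illustrative:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Holder identity; the log above uses <profile>_<uuid>.
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a lease stays valid without renewal
		RenewDeadline: 10 * time.Second, // leader must renew within this window
		RetryPeriod:   2 * time.Second,  // how often non-leaders retry acquisition
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// "successfully acquired lease" -> start the controller here.
				log.Println("acquired lease; starting provisioner controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
		},
	})
}
```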
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-107916 -n addons-107916
helpers_test.go:261: (dbg) Run:  kubectl --context addons-107916 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (35.06s)

TestAddons/parallel/CSI (48.55s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 7.917542ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-107916 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-107916 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f4a014e4-d1ad-498c-95fc-b51c1dae98aa] Pending
helpers_test.go:344: "task-pv-pod" [f4a014e4-d1ad-498c-95fc-b51c1dae98aa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f4a014e4-d1ad-498c-95fc-b51c1dae98aa] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.005360322s
addons_test.go:584: (dbg) Run:  kubectl --context addons-107916 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-107916 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-107916 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-107916 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-107916 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-107916 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-107916 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [705dae58-13c6-4af5-a44e-992e6c34f91b] Pending
helpers_test.go:344: "task-pv-pod-restore" [705dae58-13c6-4af5-a44e-992e6c34f91b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [705dae58-13c6-4af5-a44e-992e6c34f91b] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003608992s
addons_test.go:626: (dbg) Run:  kubectl --context addons-107916 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-107916 delete pod task-pv-pod-restore: (1.266044368s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-107916 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-107916 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-107916 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (819.770785ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0214 02:58:21.349444 1147422 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:58:21.354011 1147422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:58:21.354035 1147422 out.go:304] Setting ErrFile to fd 2...
	I0214 02:58:21.354045 1147422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:58:21.354331 1147422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 02:58:21.354655 1147422 mustload.go:65] Loading cluster: addons-107916
	I0214 02:58:21.355066 1147422 config.go:182] Loaded profile config "addons-107916": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 02:58:21.355087 1147422 addons.go:597] checking whether the cluster is paused
	I0214 02:58:21.355198 1147422 config.go:182] Loaded profile config "addons-107916": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 02:58:21.355218 1147422 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:58:21.355892 1147422 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:58:21.385063 1147422 ssh_runner.go:195] Run: systemctl --version
	I0214 02:58:21.385125 1147422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:58:21.425755 1147422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:58:21.516566 1147422 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0214 02:58:21.516690 1147422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 02:58:21.598493 1147422 cri.go:89] found id: "9c77c164d07ea9c7f626bc93b813344276940e44c048bc6e5b0217f875e4c3aa"
	I0214 02:58:21.598517 1147422 cri.go:89] found id: "1b04ca41722c7c337972c8080193b80853fcf84c186807e0a2f7f9957f965c61"
	I0214 02:58:21.598524 1147422 cri.go:89] found id: "955f8a4d6be84b1b0ade10e6bec0bd0664a588b379153d775db2f3fdfe4d6060"
	I0214 02:58:21.598528 1147422 cri.go:89] found id: "a3efd34b39f498939381fb125e33dd5237ab5fa4b7ac8868de579af999c8449d"
	I0214 02:58:21.598533 1147422 cri.go:89] found id: "2b70b5b79de92e56cc11256fcecda2ab69b33f1cad3f5cdc142bae5e8de28f55"
	I0214 02:58:21.598537 1147422 cri.go:89] found id: "1b61eda6a526a0925dbe677d94d6c4a0babe85e445c6e5e34fcede808d96da92"
	I0214 02:58:21.598541 1147422 cri.go:89] found id: "ef7b5974b2820271da61c9499b68e9da32e386f9ba8630f8112a2759dcb35075"
	I0214 02:58:21.598545 1147422 cri.go:89] found id: "c8cb76b91902c04726d3ef4c7097df26b51080dd126614e4896c14c3c3485b57"
	I0214 02:58:21.598550 1147422 cri.go:89] found id: "b35a6c596cfa6cfb69d2b70ce6798029c49d17e341f3f42a05ec76d94aa5f2c6"
	I0214 02:58:21.598556 1147422 cri.go:89] found id: "3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"
	I0214 02:58:21.598560 1147422 cri.go:89] found id: "732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"
	I0214 02:58:21.598565 1147422 cri.go:89] found id: "a8b9ad96a4cf381ad63937cb9cd00b9b8fb38ee1eba33858825e01aed6d326a2"
	I0214 02:58:21.598574 1147422 cri.go:89] found id: "7aa64a9ad4c3a86542d0495ced8fe5b123bc91517f08aeda2ce997fdcb9b6f54"
	I0214 02:58:21.598581 1147422 cri.go:89] found id: "fa504d1b8fe727aba4f266a4b61f44c00699490b9f9ea9a99de52d358a959cbd"
	I0214 02:58:21.598585 1147422 cri.go:89] found id: "244bda7cba5544ede26d1a161fde871f1f0343ea53826e30e2579932cb6fe3e1"
	I0214 02:58:21.598590 1147422 cri.go:89] found id: "400b3f8f47c624b7f70d161c4f843d2f25f3215c8c38a178fac0938c3bbfa36c"
	I0214 02:58:21.598603 1147422 cri.go:89] found id: "b80dce1302efbac98cbe18a6f823462f0bba917d6788e1dbe962e8e5c877057f"
	I0214 02:58:21.598610 1147422 cri.go:89] found id: "f80836cb769d35d70e4058ed2c08a868ec2ec839d44473cf672960a0c82a2102"
	I0214 02:58:21.598614 1147422 cri.go:89] found id: "873197f66b7ad68ed2fb2cbf1116587a9c2034c96c29937c781284a776a67d44"
	I0214 02:58:21.598618 1147422 cri.go:89] found id: ""
	I0214 02:58:21.598672 1147422 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0214 02:58:21.672980 1147422 out.go:177] 
	W0214 02:58:21.675234 1147422 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-14T02:58:21Z" level=error msg="stat /run/containerd/runc/k8s.io/844b63b2476e2bb14301ad4bd9b2514bbc9fac23b7fc84504ba232dfd66404fa: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-14T02:58:21Z" level=error msg="stat /run/containerd/runc/k8s.io/844b63b2476e2bb14301ad4bd9b2514bbc9fac23b7fc84504ba232dfd66404fa: no such file or directory"
	
	W0214 02:58:21.675265 1147422 out.go:239] * 
	* 
	W0214 02:58:22.072937 1147422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 02:58:22.075921 1147422 out.go:177] 

** /stderr **
addons_test.go:640: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-107916 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
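The exit status 11 above is not a CSI failure as such: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl (cri.go:54) and then running `sudo runc --root /run/containerd/runc/k8s.io list -f json`, and one of the just-removed ingress containers vanished between the two steps, so runc's stat of its state directory failed with "no such file or directory". A hedged sketch of one way to tolerate that race, retrying the listing when the error text points at a container that disappeared mid-scan (illustrative only, not minikube's actual fix):

```go
package pause

import (
	"os/exec"
	"strings"
	"time"
)

// listRuncJSON runs `runc list -f json` under the containerd root and retries
// when a container exits between runc's directory scan and its stat of the
// container state dir, which surfaces as "no such file or directory".
func listRuncJSON(root string) ([]byte, error) {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		if !strings.Contains(string(out), "no such file or directory") {
			return out, err // a real failure, not the disappearing-container race
		}
		lastErr = err
		time.Sleep(200 * time.Millisecond) // let the runtime finish the teardown
	}
	return nil, lastErr
}
```
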
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-arm64 -p addons-107916 addons disable volumesnapshots --alsologtostderr -v=1: (1.123847543s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-107916
helpers_test.go:235: (dbg) docker inspect addons-107916:

-- stdout --
	[
	    {
	        "Id": "b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2",
	        "Created": "2024-02-14T02:55:16.420595551Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1136363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T02:55:16.696144744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2/hosts",
	        "LogPath": "/var/lib/docker/containers/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2/b44787e49875f76488e7baf03f5399dde3d5227c7d4e2f9559a5404a24ca89c2-json.log",
	        "Name": "/addons-107916",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-107916:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-107916",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ad15bcfab78f1b42a106505f16e91b05f3ec6d12b5d6ee964cebb0825f950870-init/diff:/var/lib/docker/overlay2/2b57dacbb0185892ad2774651ca7e304a0e7ce49c55385fdb5828fd98438b35e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad15bcfab78f1b42a106505f16e91b05f3ec6d12b5d6ee964cebb0825f950870/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad15bcfab78f1b42a106505f16e91b05f3ec6d12b5d6ee964cebb0825f950870/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad15bcfab78f1b42a106505f16e91b05f3ec6d12b5d6ee964cebb0825f950870/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-107916",
	                "Source": "/var/lib/docker/volumes/addons-107916/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-107916",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-107916",
	                "name.minikube.sigs.k8s.io": "addons-107916",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24617fdc7846c69b15f7b14765c3111c326c83c35829afe4fa68f0759e916cae",
	            "SandboxKey": "/var/run/docker/netns/24617fdc7846",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34032"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34031"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34028"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34030"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34029"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-107916": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b44787e49875",
	                        "addons-107916"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "b93ddc45641f03c8b48df5c33691deb87ba7dfc5305e220447487c34fae09735",
	                    "EndpointID": "667b147d1515ed1fa71c2f6a12447183751d391b179724dba1c54edc004d361c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-107916",
	                        "b44787e49875"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-107916 -n addons-107916
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-107916 logs -n 25: (1.655532061s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-630494                                                                     | download-only-630494   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| delete  | -p download-only-950365                                                                     | download-only-950365   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| delete  | -p download-only-695284                                                                     | download-only-695284   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| start   | --download-only -p                                                                          | download-docker-935155 | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | download-docker-935155                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-935155                                                                   | download-docker-935155 | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-348755   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | binary-mirror-348755                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39189                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-348755                                                                     | binary-mirror-348755   | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | addons-107916                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | addons-107916                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-107916 --wait=true                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | -p addons-107916                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-107916 ip                                                                            | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	| addons  | addons-107916 addons disable                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | -p addons-107916                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-107916 ssh cat                                                                       | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | /opt/local-path-provisioner/pvc-2358e9d1-a1ee-49c0-8dab-57be5f72d3ad_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-107916 addons disable                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | addons-107916                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | addons-107916                                                                               |                        |         |         |                     |                     |
	| addons  | addons-107916 addons                                                                        | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-107916 ssh curl -s                                                                   | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-107916 ip                                                                            | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| addons  | addons-107916 addons disable                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-107916 addons                                                                        | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-107916 addons disable                                                                | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC |                     |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-107916 addons                                                                        | addons-107916          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:54:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:54:52.680592 1135902 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:54:52.681297 1135902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:54:52.681343 1135902 out.go:304] Setting ErrFile to fd 2...
	I0214 02:54:52.681370 1135902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:54:52.681697 1135902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 02:54:52.682231 1135902 out.go:298] Setting JSON to false
	I0214 02:54:52.683134 1135902 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20239,"bootTime":1707859054,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 02:54:52.683248 1135902 start.go:138] virtualization:  
	I0214 02:54:52.685872 1135902 out.go:177] * [addons-107916] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:54:52.688169 1135902 out.go:177]   - MINIKUBE_LOCATION=18166
	I0214 02:54:52.690131 1135902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:54:52.688307 1135902 notify.go:220] Checking for updates...
	I0214 02:54:52.692326 1135902 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 02:54:52.694278 1135902 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 02:54:52.696507 1135902 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 02:54:52.698042 1135902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 02:54:52.699926 1135902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:54:52.719414 1135902 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:54:52.719598 1135902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:54:52.787447 1135902 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:54:52.778072093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:54:52.787574 1135902 docker.go:295] overlay module found
	I0214 02:54:52.790777 1135902 out.go:177] * Using the docker driver based on user configuration
	I0214 02:54:52.792781 1135902 start.go:298] selected driver: docker
	I0214 02:54:52.792797 1135902 start.go:902] validating driver "docker" against <nil>
	I0214 02:54:52.792809 1135902 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 02:54:52.793446 1135902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:54:52.846117 1135902 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:54:52.837653977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:54:52.846300 1135902 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:54:52.846527 1135902 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 02:54:52.848466 1135902 out.go:177] * Using Docker driver with root privileges
	I0214 02:54:52.850364 1135902 cni.go:84] Creating CNI manager for ""
	I0214 02:54:52.850384 1135902 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:54:52.850395 1135902 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 02:54:52.850406 1135902 start_flags.go:321] config:
	{Name:addons-107916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-107916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:54:52.852586 1135902 out.go:177] * Starting control plane node addons-107916 in cluster addons-107916
	I0214 02:54:52.854710 1135902 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0214 02:54:52.856463 1135902 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:54:52.858280 1135902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:54:52.858340 1135902 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0214 02:54:52.858353 1135902 cache.go:56] Caching tarball of preloaded images
	I0214 02:54:52.858381 1135902 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:54:52.858442 1135902 preload.go:174] Found /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0214 02:54:52.858453 1135902 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0214 02:54:52.858816 1135902 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/config.json ...
	I0214 02:54:52.858849 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/config.json: {Name:mk274e10426dd26b4871c717ee700cbff5881a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:54:52.872907 1135902 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:54:52.873020 1135902 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:54:52.873042 1135902 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 02:54:52.873050 1135902 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 02:54:52.873058 1135902 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 02:54:52.873067 1135902 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0214 02:55:08.980979 1135902 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0214 02:55:08.981018 1135902 cache.go:194] Successfully downloaded all kic artifacts
	I0214 02:55:08.981072 1135902 start.go:365] acquiring machines lock for addons-107916: {Name:mk6b22d499aa6f5c49dd6b9052c82033de2a5e67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 02:55:08.981637 1135902 start.go:369] acquired machines lock for "addons-107916" in 543.518µs
	I0214 02:55:08.981684 1135902 start.go:93] Provisioning new machine with config: &{Name:addons-107916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-107916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0214 02:55:08.981765 1135902 start.go:125] createHost starting for "" (driver="docker")
	I0214 02:55:08.984089 1135902 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0214 02:55:08.984348 1135902 start.go:159] libmachine.API.Create for "addons-107916" (driver="docker")
	I0214 02:55:08.984385 1135902 client.go:168] LocalClient.Create starting
	I0214 02:55:08.984507 1135902 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem
	I0214 02:55:09.455745 1135902 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem
	I0214 02:55:09.780833 1135902 cli_runner.go:164] Run: docker network inspect addons-107916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 02:55:09.795058 1135902 cli_runner.go:211] docker network inspect addons-107916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 02:55:09.795151 1135902 network_create.go:281] running [docker network inspect addons-107916] to gather additional debugging logs...
	I0214 02:55:09.795174 1135902 cli_runner.go:164] Run: docker network inspect addons-107916
	W0214 02:55:09.812598 1135902 cli_runner.go:211] docker network inspect addons-107916 returned with exit code 1
	I0214 02:55:09.812632 1135902 network_create.go:284] error running [docker network inspect addons-107916]: docker network inspect addons-107916: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-107916 not found
	I0214 02:55:09.812645 1135902 network_create.go:286] output of [docker network inspect addons-107916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-107916 not found
	
	** /stderr **
	I0214 02:55:09.812761 1135902 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 02:55:09.827555 1135902 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025b6b00}
	I0214 02:55:09.827596 1135902 network_create.go:124] attempt to create docker network addons-107916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0214 02:55:09.827656 1135902 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-107916 addons-107916
	I0214 02:55:09.891730 1135902 network_create.go:108] docker network addons-107916 192.168.49.0/24 created
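	For reference, the subnet and gateway chosen above can be read back from the created network with the plain docker CLI (an illustrative check, not something the test harness runs):
	docker network inspect addons-107916 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	# expected, per the log: 192.168.49.0/24 via 192.168.49.1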
	I0214 02:55:09.891764 1135902 kic.go:121] calculated static IP "192.168.49.2" for the "addons-107916" container
	I0214 02:55:09.891837 1135902 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 02:55:09.906374 1135902 cli_runner.go:164] Run: docker volume create addons-107916 --label name.minikube.sigs.k8s.io=addons-107916 --label created_by.minikube.sigs.k8s.io=true
	I0214 02:55:09.922178 1135902 oci.go:103] Successfully created a docker volume addons-107916
	I0214 02:55:09.922265 1135902 cli_runner.go:164] Run: docker run --rm --name addons-107916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-107916 --entrypoint /usr/bin/test -v addons-107916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0214 02:55:12.071570 1135902 cli_runner.go:217] Completed: docker run --rm --name addons-107916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-107916 --entrypoint /usr/bin/test -v addons-107916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (2.149254273s)
	I0214 02:55:12.071606 1135902 oci.go:107] Successfully prepared a docker volume addons-107916
	I0214 02:55:12.071644 1135902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:55:12.071667 1135902 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 02:55:12.071759 1135902 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-107916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 02:55:16.349160 1135902 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-107916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.277352862s)
	I0214 02:55:16.349202 1135902 kic.go:203] duration metric: took 4.277532 seconds to extract preloaded images to volume
	W0214 02:55:16.349352 1135902 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 02:55:16.349485 1135902 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 02:55:16.407151 1135902 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-107916 --name addons-107916 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-107916 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-107916 --network addons-107916 --ip 192.168.49.2 --volume addons-107916:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0214 02:55:16.704599 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Running}}
	I0214 02:55:16.730128 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:16.756066 1135902 cli_runner.go:164] Run: docker exec addons-107916 stat /var/lib/dpkg/alternatives/iptables
	I0214 02:55:16.818054 1135902 oci.go:144] the created container "addons-107916" has a running status.
	I0214 02:55:16.818086 1135902 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa...
	I0214 02:55:17.282809 1135902 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 02:55:17.310707 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:17.330723 1135902 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 02:55:17.330749 1135902 kic_runner.go:114] Args: [docker exec --privileged addons-107916 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 02:55:17.405158 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:17.432937 1135902 machine.go:88] provisioning docker machine ...
	I0214 02:55:17.432970 1135902 ubuntu.go:169] provisioning hostname "addons-107916"
	I0214 02:55:17.433044 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:17.460304 1135902 main.go:141] libmachine: Using SSH client type: native
	I0214 02:55:17.460728 1135902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34032 <nil> <nil>}
	I0214 02:55:17.460746 1135902 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-107916 && echo "addons-107916" | sudo tee /etc/hostname
	I0214 02:55:17.641124 1135902 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-107916
	
	I0214 02:55:17.641205 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:17.663986 1135902 main.go:141] libmachine: Using SSH client type: native
	I0214 02:55:17.664399 1135902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34032 <nil> <nil>}
	I0214 02:55:17.664420 1135902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-107916' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-107916/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-107916' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 02:55:17.803964 1135902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 02:55:17.804032 1135902 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18166-1129740/.minikube CaCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18166-1129740/.minikube}
	I0214 02:55:17.804067 1135902 ubuntu.go:177] setting up certificates
	I0214 02:55:17.804105 1135902 provision.go:83] configureAuth start
	I0214 02:55:17.804229 1135902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-107916
	I0214 02:55:17.821292 1135902 provision.go:138] copyHostCerts
	I0214 02:55:17.821374 1135902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem (1082 bytes)
	I0214 02:55:17.821507 1135902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem (1123 bytes)
	I0214 02:55:17.821567 1135902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem (1675 bytes)
	I0214 02:55:17.821608 1135902 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem org=jenkins.addons-107916 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-107916]
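	The SAN list requested above (the node IP plus localhost, minikube, and addons-107916) ends up in the generated server certificate; it can be confirmed with openssl against the server.pem path from this log (illustrative only):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'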
	I0214 02:55:18.013638 1135902 provision.go:172] copyRemoteCerts
	I0214 02:55:18.013724 1135902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 02:55:18.013782 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.031376 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
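	The connection the harness opens here can be reproduced manually with OpenSSH, using the key path, port, and user from the client struct above (a sketch; the published port is host-specific):
	ssh -i /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa \
	    -p 34032 docker@127.0.0.1 -- uname -a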
	I0214 02:55:18.132752 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 02:55:18.158097 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0214 02:55:18.181976 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 02:55:18.206484 1135902 provision.go:86] duration metric: configureAuth took 402.348621ms
	I0214 02:55:18.206511 1135902 ubuntu.go:193] setting minikube options for container-runtime
	I0214 02:55:18.206705 1135902 config.go:182] Loaded profile config "addons-107916": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 02:55:18.206724 1135902 machine.go:91] provisioned docker machine in 773.76556ms
	I0214 02:55:18.206731 1135902 client.go:171] LocalClient.Create took 9.222338542s
	I0214 02:55:18.206750 1135902 start.go:167] duration metric: libmachine.API.Create for "addons-107916" took 9.222403253s
	I0214 02:55:18.206767 1135902 start.go:300] post-start starting for "addons-107916" (driver="docker")
	I0214 02:55:18.206777 1135902 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 02:55:18.206838 1135902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 02:55:18.206890 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.223581 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.316993 1135902 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 02:55:18.320132 1135902 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 02:55:18.320172 1135902 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 02:55:18.320185 1135902 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 02:55:18.320194 1135902 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 02:55:18.320205 1135902 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/addons for local assets ...
	I0214 02:55:18.320277 1135902 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/files for local assets ...
	I0214 02:55:18.320318 1135902 start.go:303] post-start completed in 113.543978ms
	I0214 02:55:18.320643 1135902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-107916
	I0214 02:55:18.336202 1135902 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/config.json ...
	I0214 02:55:18.336511 1135902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 02:55:18.336567 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.353151 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.444544 1135902 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 02:55:18.448771 1135902 start.go:128] duration metric: createHost completed in 9.466988829s
	I0214 02:55:18.448808 1135902 start.go:83] releasing machines lock for "addons-107916", held for 9.467145117s
	I0214 02:55:18.448880 1135902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-107916
	I0214 02:55:18.466144 1135902 ssh_runner.go:195] Run: cat /version.json
	I0214 02:55:18.466196 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.466226 1135902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 02:55:18.466289 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:18.483661 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.495609 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:18.709217 1135902 ssh_runner.go:195] Run: systemctl --version
	I0214 02:55:18.713855 1135902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 02:55:18.718166 1135902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0214 02:55:18.745184 1135902 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0214 02:55:18.745276 1135902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 02:55:18.773038 1135902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0214 02:55:18.773067 1135902 start.go:475] detecting cgroup driver to use...
	I0214 02:55:18.773100 1135902 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 02:55:18.773164 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0214 02:55:18.785654 1135902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0214 02:55:18.797226 1135902 docker.go:217] disabling cri-docker service (if available) ...
	I0214 02:55:18.797333 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 02:55:18.811408 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 02:55:18.825877 1135902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 02:55:18.923004 1135902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 02:55:19.015460 1135902 docker.go:233] disabling docker service ...
	I0214 02:55:19.015599 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 02:55:19.036142 1135902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 02:55:19.048070 1135902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 02:55:19.143298 1135902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 02:55:19.244823 1135902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 02:55:19.256366 1135902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 02:55:19.273124 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0214 02:55:19.283357 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0214 02:55:19.293838 1135902 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0214 02:55:19.293937 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0214 02:55:19.304263 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 02:55:19.314248 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0214 02:55:19.324356 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 02:55:19.334030 1135902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 02:55:19.343055 1135902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
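	The sed edits above pin the pause image, keep cgroupfs as the cgroup driver (SystemdCgroup = false), and point the CNI conf_dir at /etc/cni/net.d; the net effect can be spot-checked inside the container (expected values per the log, not captured output):
	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# SystemdCgroup = false
	# sandbox_image = "registry.k8s.io/pause:3.9"
	# conf_dir = "/etc/cni/net.d"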
	I0214 02:55:19.352583 1135902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 02:55:19.361621 1135902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 02:55:19.370237 1135902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 02:55:19.458134 1135902 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0214 02:55:19.588913 1135902 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0214 02:55:19.589086 1135902 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0214 02:55:19.592751 1135902 start.go:543] Will wait 60s for crictl version
	I0214 02:55:19.592866 1135902 ssh_runner.go:195] Run: which crictl
	I0214 02:55:19.596216 1135902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 02:55:19.632693 1135902 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0214 02:55:19.632780 1135902 ssh_runner.go:195] Run: containerd --version
	I0214 02:55:19.658820 1135902 ssh_runner.go:195] Run: containerd --version
	I0214 02:55:19.687121 1135902 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0214 02:55:19.688747 1135902 cli_runner.go:164] Run: docker network inspect addons-107916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 02:55:19.708514 1135902 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 02:55:19.712177 1135902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 02:55:19.723189 1135902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:55:19.723274 1135902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 02:55:19.760361 1135902 containerd.go:612] all images are preloaded for containerd runtime.
	I0214 02:55:19.760386 1135902 containerd.go:519] Images already preloaded, skipping extraction
	I0214 02:55:19.760449 1135902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 02:55:19.800968 1135902 containerd.go:612] all images are preloaded for containerd runtime.
	I0214 02:55:19.800991 1135902 cache_images.go:84] Images are preloaded, skipping loading
	I0214 02:55:19.801060 1135902 ssh_runner.go:195] Run: sudo crictl info
	I0214 02:55:19.837882 1135902 cni.go:84] Creating CNI manager for ""
	I0214 02:55:19.837908 1135902 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:55:19.837934 1135902 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 02:55:19.837954 1135902 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-107916 NodeName:addons-107916 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 02:55:19.838089 1135902 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-107916"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
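	Before this config is handed to kubeadm init (below), it could be checked offline; recent kubeadm releases (v1.26+) ship a validate subcommand, so a hypothetical pre-check against the file minikube writes would look like:
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml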
	
	I0214 02:55:19.838154 1135902 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-107916 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-107916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
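	The drop-in above clears and replaces ExecStart; the merged unit systemd will actually run can be reviewed with systemctl (illustrative):
	systemctl cat kubelet
	# shows /lib/systemd/system/kubelet.service followed by
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf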
	I0214 02:55:19.838225 1135902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0214 02:55:19.847194 1135902 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 02:55:19.847335 1135902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 02:55:19.856419 1135902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0214 02:55:19.874573 1135902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 02:55:19.892732 1135902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0214 02:55:19.910887 1135902 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 02:55:19.914323 1135902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 02:55:19.924967 1135902 certs.go:56] Setting up /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916 for IP: 192.168.49.2
	I0214 02:55:19.925008 1135902 certs.go:190] acquiring lock for shared ca certs: {Name:mk121f32762802a204d98d3cbcae9456442a0756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:19.925136 1135902 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key
	I0214 02:55:20.298274 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt ...
	I0214 02:55:20.298308 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt: {Name:mk9232405af826090594a99131ef96f3d2514d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:20.298896 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key ...
	I0214 02:55:20.298913 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key: {Name:mk0a6668030acf9159a9780805dccc10fe597a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:20.299344 1135902 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key
	I0214 02:55:20.890828 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt ...
	I0214 02:55:20.890861 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt: {Name:mk7631a211312feb81d4799510095b7fb6aa8261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:20.891063 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key ...
	I0214 02:55:20.891076 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key: {Name:mk5f00b5fdab86a0dd1e9f950d80789e275fdf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:20.891671 1135902 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.key
	I0214 02:55:20.891696 1135902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt with IP's: []
	I0214 02:55:21.329028 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt ...
	I0214 02:55:21.329065 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: {Name:mk4747a3dcba367cabde1c402e50d60b2eb375db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.329884 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.key ...
	I0214 02:55:21.329905 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.key: {Name:mkbf9ed041b362f64d90580e1f3eb25eb63ebf27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.330003 1135902 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key.dd3b5fb2
	I0214 02:55:21.330024 1135902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0214 02:55:21.669274 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt.dd3b5fb2 ...
	I0214 02:55:21.669304 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt.dd3b5fb2: {Name:mk2b44d0c4dc0933fb45841c75231c5d8e6d48cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.670054 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key.dd3b5fb2 ...
	I0214 02:55:21.670074 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key.dd3b5fb2: {Name:mkcbf1578689c03c0cd0903526ade0fca40ace19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.670166 1135902 certs.go:337] copying /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt
	I0214 02:55:21.670255 1135902 certs.go:341] copying /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key
	I0214 02:55:21.670314 1135902 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.key
	I0214 02:55:21.670337 1135902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.crt with IP's: []
	I0214 02:55:21.856925 1135902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.crt ...
	I0214 02:55:21.856955 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.crt: {Name:mk88b1eababa70355b7017dc59995e38bfcf3ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.857143 1135902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.key ...
	I0214 02:55:21.857161 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.key: {Name:mk9439f0e09b9c8af830afe5024cc89054cbd6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:21.857854 1135902 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 02:55:21.857904 1135902 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem (1082 bytes)
	I0214 02:55:21.857930 1135902 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem (1123 bytes)
	I0214 02:55:21.857958 1135902 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem (1675 bytes)
	I0214 02:55:21.858632 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 02:55:21.883313 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 02:55:21.907956 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 02:55:21.932137 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 02:55:21.956222 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 02:55:21.980738 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 02:55:22.008569 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 02:55:22.034568 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 02:55:22.059291 1135902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 02:55:22.084369 1135902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 02:55:22.102623 1135902 ssh_runner.go:195] Run: openssl version
	I0214 02:55:22.108124 1135902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 02:55:22.118099 1135902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:55:22.121809 1135902 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:55 /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:55:22.121894 1135902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:55:22.129182 1135902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
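	The b5213941.0 name comes from OpenSSL's subject-hash lookup convention for /etc/ssl/certs: the symlink is named after the hash printed by the command above, with a .0 suffix for the first certificate with that hash:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941   -> hence the symlink /etc/ssl/certs/b5213941.0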
	I0214 02:55:22.138907 1135902 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 02:55:22.142345 1135902 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0214 02:55:22.142394 1135902 kubeadm.go:404] StartCluster: {Name:addons-107916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-107916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:55:22.142472 1135902 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0214 02:55:22.142543 1135902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 02:55:22.180825 1135902 cri.go:89] found id: ""
	I0214 02:55:22.180909 1135902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 02:55:22.189940 1135902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 02:55:22.198702 1135902 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0214 02:55:22.198789 1135902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 02:55:22.207746 1135902 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 02:55:22.207799 1135902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 02:55:22.259341 1135902 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0214 02:55:22.259750 1135902 kubeadm.go:322] [preflight] Running pre-flight checks
	I0214 02:55:22.300839 1135902 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0214 02:55:22.300953 1135902 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0214 02:55:22.301012 1135902 kubeadm.go:322] OS: Linux
	I0214 02:55:22.301085 1135902 kubeadm.go:322] CGROUPS_CPU: enabled
	I0214 02:55:22.301154 1135902 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0214 02:55:22.301231 1135902 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0214 02:55:22.301299 1135902 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0214 02:55:22.301374 1135902 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0214 02:55:22.301443 1135902 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0214 02:55:22.301524 1135902 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0214 02:55:22.301595 1135902 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0214 02:55:22.301664 1135902 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0214 02:55:22.373581 1135902 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 02:55:22.373736 1135902 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 02:55:22.373856 1135902 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 02:55:22.606025 1135902 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 02:55:22.610703 1135902 out.go:204]   - Generating certificates and keys ...
	I0214 02:55:22.610839 1135902 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0214 02:55:22.610948 1135902 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0214 02:55:22.890283 1135902 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 02:55:23.112828 1135902 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0214 02:55:23.540747 1135902 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0214 02:55:23.909634 1135902 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0214 02:55:24.270156 1135902 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0214 02:55:24.270517 1135902 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-107916 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 02:55:25.374502 1135902 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0214 02:55:25.374907 1135902 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-107916 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 02:55:26.181751 1135902 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 02:55:26.794538 1135902 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 02:55:27.425780 1135902 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0214 02:55:27.425867 1135902 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 02:55:28.301630 1135902 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 02:55:28.843465 1135902 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 02:55:29.237342 1135902 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 02:55:29.534216 1135902 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 02:55:29.534916 1135902 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 02:55:29.537583 1135902 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 02:55:29.539786 1135902 out.go:204]   - Booting up control plane ...
	I0214 02:55:29.539883 1135902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 02:55:29.539960 1135902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 02:55:29.542108 1135902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 02:55:29.555846 1135902 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 02:55:29.556908 1135902 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 02:55:29.557073 1135902 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0214 02:55:29.658910 1135902 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 02:55:38.163298 1135902 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504423 seconds
	I0214 02:55:38.163413 1135902 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 02:55:38.177926 1135902 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 02:55:38.701895 1135902 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 02:55:38.702086 1135902 kubeadm.go:322] [mark-control-plane] Marking the node addons-107916 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 02:55:39.215668 1135902 kubeadm.go:322] [bootstrap-token] Using token: xft5s7.209qx6e1eqh56ont
	I0214 02:55:39.217609 1135902 out.go:204]   - Configuring RBAC rules ...
	I0214 02:55:39.217761 1135902 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 02:55:39.223566 1135902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 02:55:39.233116 1135902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 02:55:39.237255 1135902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0214 02:55:39.241413 1135902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 02:55:39.245586 1135902 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 02:55:39.262817 1135902 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 02:55:39.505287 1135902 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0214 02:55:39.632080 1135902 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0214 02:55:39.633011 1135902 kubeadm.go:322] 
	I0214 02:55:39.633084 1135902 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0214 02:55:39.633091 1135902 kubeadm.go:322] 
	I0214 02:55:39.633163 1135902 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0214 02:55:39.633168 1135902 kubeadm.go:322] 
	I0214 02:55:39.633192 1135902 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0214 02:55:39.633247 1135902 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 02:55:39.633302 1135902 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 02:55:39.633307 1135902 kubeadm.go:322] 
	I0214 02:55:39.633357 1135902 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0214 02:55:39.633364 1135902 kubeadm.go:322] 
	I0214 02:55:39.633409 1135902 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 02:55:39.633413 1135902 kubeadm.go:322] 
	I0214 02:55:39.633464 1135902 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0214 02:55:39.633542 1135902 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 02:55:39.633606 1135902 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 02:55:39.633613 1135902 kubeadm.go:322] 
	I0214 02:55:39.633691 1135902 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 02:55:39.633762 1135902 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0214 02:55:39.633767 1135902 kubeadm.go:322] 
	I0214 02:55:39.633845 1135902 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xft5s7.209qx6e1eqh56ont \
	I0214 02:55:39.633943 1135902 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d3f320a98a2f1022ee1a4d9bbdd9d3ce0ce634a8fab1d54ded076f0a14b0e04e \
	I0214 02:55:39.633963 1135902 kubeadm.go:322] 	--control-plane 
	I0214 02:55:39.633968 1135902 kubeadm.go:322] 
	I0214 02:55:39.634046 1135902 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0214 02:55:39.634051 1135902 kubeadm.go:322] 
	I0214 02:55:39.634127 1135902 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xft5s7.209qx6e1eqh56ont \
	I0214 02:55:39.634222 1135902 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d3f320a98a2f1022ee1a4d9bbdd9d3ce0ce634a8fab1d54ded076f0a14b0e04e 
	I0214 02:55:39.640063 1135902 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0214 02:55:39.640180 1135902 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
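Editor's note: the two join commands printed above embed a --discovery-token-ca-cert-hash, which is the SHA-256 digest of the cluster CA's Subject Public Key Info. If it ever needed to be recomputed by hand, the standard recipe from the kubeadm docs is (sketch; assumes the default kubeadm CA path on the control-plane node):

	# recompute the discovery-token-ca-cert-hash (standard kubeadm recipe)
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'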
	I0214 02:55:39.640354 1135902 cni.go:84] Creating CNI manager for ""
	I0214 02:55:39.640383 1135902 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:55:39.643329 1135902 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 02:55:39.645227 1135902 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 02:55:39.658702 1135902 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0214 02:55:39.658720 1135902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 02:55:39.693456 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 02:55:40.696947 1135902 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.003451432s)
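Editor's note: "scp memory --> /var/tmp/minikube/cni.yaml" means the manifest bytes are streamed over the existing SSH session rather than copied from a file on the host. A rough shell equivalent, assuming the SSH endpoint that appears later in this log (127.0.0.1:34032, user docker; KEY is a hypothetical stand-in for the id_rsa path shown in the sshutil lines):

	# sketch only: stream a manifest into the node, then apply it
	cat cni.yaml | ssh -p 34032 -i "$KEY" docker@127.0.0.1 \
	  'sudo tee /var/tmp/minikube/cni.yaml >/dev/null'
	ssh -p 34032 -i "$KEY" docker@127.0.0.1 \
	  'sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml'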
	I0214 02:55:40.697004 1135902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 02:55:40.697122 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:40.697230 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=40f210e92693e4612e04be0697de06db21ac5cf0 minikube.k8s.io/name=addons-107916 minikube.k8s.io/updated_at=2024_02_14T02_55_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:40.890126 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:40.890237 1135902 ops.go:34] apiserver oom_adj: -16
	I0214 02:55:41.390461 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:41.890277 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:42.391208 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:42.890848 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:43.390277 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:43.890995 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:44.391082 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:44.890373 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:45.391102 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:45.890916 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:46.390775 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:46.890727 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:47.390278 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:47.890512 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:48.391080 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:48.891160 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:49.390318 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:49.890811 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:50.390280 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:50.890611 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:51.390564 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:51.890255 1135902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:55:51.980427 1135902 kubeadm.go:1088] duration metric: took 11.283354795s to wait for elevateKubeSystemPrivileges.
	I0214 02:55:51.980459 1135902 kubeadm.go:406] StartCluster complete in 29.838068306s
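Editor's note: the burst of identical "kubectl get sa default" runs above is a fixed-interval poll: minikube retries roughly twice a second until the "default" ServiceAccount exists (i.e. the controller-manager's service-account controller has caught up) before it trusts the cluster-admin binding created earlier. A minimal shell equivalent of that wait, using the binary and kubeconfig paths from the log:

	# poll until the default ServiceAccount is provisioned
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log shows ~500ms between attempts
	done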
	I0214 02:55:51.980477 1135902 settings.go:142] acquiring lock: {Name:mkcc971fda27c724b3c1908f1b3da87aea10d784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:51.980597 1135902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 02:55:51.980988 1135902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/kubeconfig: {Name:mkc9d4ef83ac02b186254a828f8611428408dff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:55:51.981639 1135902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 02:55:51.981938 1135902 config.go:182] Loaded profile config "addons-107916": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 02:55:51.982100 1135902 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0214 02:55:51.982199 1135902 addons.go:69] Setting yakd=true in profile "addons-107916"
	I0214 02:55:51.982214 1135902 addons.go:234] Setting addon yakd=true in "addons-107916"
	I0214 02:55:51.982249 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.982688 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.983188 1135902 addons.go:69] Setting cloud-spanner=true in profile "addons-107916"
	I0214 02:55:51.983207 1135902 addons.go:234] Setting addon cloud-spanner=true in "addons-107916"
	I0214 02:55:51.983247 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.983663 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.983862 1135902 addons.go:69] Setting metrics-server=true in profile "addons-107916"
	I0214 02:55:51.983880 1135902 addons.go:234] Setting addon metrics-server=true in "addons-107916"
	I0214 02:55:51.983911 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.984307 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.984694 1135902 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-107916"
	I0214 02:55:51.984736 1135902 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-107916"
	I0214 02:55:51.984774 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.985149 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.989762 1135902 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-107916"
	I0214 02:55:51.991034 1135902 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-107916"
	I0214 02:55:51.991120 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:51.991737 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:51.994102 1135902 addons.go:69] Setting default-storageclass=true in profile "addons-107916"
	I0214 02:55:51.994131 1135902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-107916"
	I0214 02:55:51.994460 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.004391 1135902 addons.go:69] Setting registry=true in profile "addons-107916"
	I0214 02:55:52.004835 1135902 addons.go:234] Setting addon registry=true in "addons-107916"
	I0214 02:55:52.005038 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.017003 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.005191 1135902 addons.go:69] Setting storage-provisioner=true in profile "addons-107916"
	I0214 02:55:52.035047 1135902 addons.go:234] Setting addon storage-provisioner=true in "addons-107916"
	I0214 02:55:52.035212 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.035739 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.005212 1135902 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-107916"
	I0214 02:55:52.054516 1135902 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-107916"
	I0214 02:55:52.054948 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.005221 1135902 addons.go:69] Setting volumesnapshots=true in profile "addons-107916"
	I0214 02:55:52.088529 1135902 addons.go:234] Setting addon volumesnapshots=true in "addons-107916"
	I0214 02:55:52.088612 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.089240 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.014818 1135902 addons.go:69] Setting gcp-auth=true in profile "addons-107916"
	I0214 02:55:52.122831 1135902 mustload.go:65] Loading cluster: addons-107916
	I0214 02:55:52.127532 1135902 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0214 02:55:52.129277 1135902 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0214 02:55:52.129296 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0214 02:55:52.129374 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.123546 1135902 config.go:182] Loaded profile config "addons-107916": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 02:55:52.144125 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.014952 1135902 addons.go:69] Setting ingress-dns=true in profile "addons-107916"
	I0214 02:55:52.014957 1135902 addons.go:69] Setting inspektor-gadget=true in profile "addons-107916"
	I0214 02:55:52.014942 1135902 addons.go:69] Setting ingress=true in profile "addons-107916"
	I0214 02:55:52.159720 1135902 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0214 02:55:52.160499 1135902 addons.go:234] Setting addon ingress-dns=true in "addons-107916"
	I0214 02:55:52.167776 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.160521 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0214 02:55:52.171036 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0214 02:55:52.167780 1135902 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0214 02:55:52.168232 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.160538 1135902 addons.go:234] Setting addon inspektor-gadget=true in "addons-107916"
	I0214 02:55:52.160543 1135902 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0214 02:55:52.160530 1135902 addons.go:234] Setting addon ingress=true in "addons-107916"
	I0214 02:55:52.185014 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.185467 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.217014 1135902 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0214 02:55:52.239341 1135902 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 02:55:52.239364 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0214 02:55:52.239434 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.217118 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0214 02:55:52.244444 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.269153 1135902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 02:55:52.218043 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.217959 1135902 addons.go:234] Setting addon default-storageclass=true in "addons-107916"
	I0214 02:55:52.274956 1135902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 02:55:52.274967 1135902 out.go:177]   - Using image docker.io/registry:2.8.3
	I0214 02:55:52.274971 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0214 02:55:52.275418 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.276879 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0214 02:55:52.276897 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0214 02:55:52.276962 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.277301 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.277782 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.293370 1135902 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0214 02:55:52.279383 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 02:55:52.306987 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0214 02:55:52.309093 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0214 02:55:52.316899 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0214 02:55:52.307280 1135902 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0214 02:55:52.307350 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.336829 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0214 02:55:52.324464 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0214 02:55:52.339675 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.341212 1135902 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-107916"
	I0214 02:55:52.341252 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.341728 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:52.394056 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0214 02:55:52.389998 1135902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 02:55:52.391199 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:52.399640 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
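Editor's note: each of the docker container inspect calls with the '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' template resolves which host port Docker mapped to the node container's SSH port; the sshutil line here shows the answer (34032) being dialed. The same lookup by hand, with the key path taken from this log:

	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-107916)
	ssh -p "$PORT" \
	  -i /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa \
	  docker@127.0.0.1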
	I0214 02:55:52.411723 1135902 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0214 02:55:52.425713 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0214 02:55:52.425787 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0214 02:55:52.431678 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0214 02:55:52.431703 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0214 02:55:52.431768 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.426880 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.483937 1135902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0214 02:55:52.487674 1135902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0214 02:55:52.490607 1135902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0214 02:55:52.493653 1135902 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 02:55:52.493674 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0214 02:55:52.493737 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.496190 1135902 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0214 02:55:52.490999 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.505731 1135902 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 02:55:52.505751 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0214 02:55:52.505818 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.532284 1135902 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-107916" context rescaled to 1 replica
	I0214 02:55:52.532323 1135902 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
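Editor's note: on a single-node cluster minikube trims the stock two-replica CoreDNS deployment down to one. The equivalent manual command (context name from this run) would be:

	kubectl --context addons-107916 -n kube-system scale deployment coredns --replicas=1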
	I0214 02:55:52.538194 1135902 out.go:177] * Verifying Kubernetes components...
	I0214 02:55:52.540378 1135902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 02:55:52.541876 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.541912 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.543038 1135902 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0214 02:55:52.547302 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0214 02:55:52.547323 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0214 02:55:52.547395 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.571990 1135902 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 02:55:52.572011 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 02:55:52.572074 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.599987 1135902 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0214 02:55:52.602269 1135902 out.go:177]   - Using image docker.io/busybox:stable
	I0214 02:55:52.604775 1135902 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 02:55:52.604799 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0214 02:55:52.604865 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:52.643575 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.646287 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.683487 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.713985 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.723867 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.727585 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.730090 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.733766 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:52.752453 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	W0214 02:55:52.774125 1135902 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0214 02:55:52.774157 1135902 retry.go:31] will retry after 369.891388ms: ssh: handshake failed: EOF
	I0214 02:55:53.271006 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 02:55:53.273753 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 02:55:53.357939 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0214 02:55:53.381564 1135902 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0214 02:55:53.381634 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0214 02:55:53.429385 1135902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0214 02:55:53.429455 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0214 02:55:53.482926 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0214 02:55:53.482998 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0214 02:55:53.497479 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 02:55:53.517799 1135902 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0214 02:55:53.517878 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0214 02:55:53.578106 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 02:55:53.602517 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 02:55:53.628104 1135902 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0214 02:55:53.628176 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0214 02:55:53.640631 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0214 02:55:53.640708 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0214 02:55:53.709364 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0214 02:55:53.709438 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0214 02:55:53.714137 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0214 02:55:53.714211 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0214 02:55:53.758073 1135902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0214 02:55:53.758150 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0214 02:55:53.784104 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 02:55:53.876365 1135902 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0214 02:55:53.876457 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0214 02:55:53.907710 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0214 02:55:53.987670 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0214 02:55:53.987695 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0214 02:55:54.029221 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0214 02:55:54.029288 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0214 02:55:54.046469 1135902 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 02:55:54.046540 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0214 02:55:54.066690 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0214 02:55:54.066763 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0214 02:55:54.119068 1135902 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0214 02:55:54.119146 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0214 02:55:54.187677 1135902 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0214 02:55:54.187752 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0214 02:55:54.199173 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0214 02:55:54.199247 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0214 02:55:54.295839 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0214 02:55:54.295910 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0214 02:55:54.297393 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 02:55:54.342061 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0214 02:55:54.342125 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0214 02:55:54.428029 1135902 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0214 02:55:54.428102 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0214 02:55:54.470281 1135902 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 02:55:54.470356 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0214 02:55:54.490323 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0214 02:55:54.491109 1135902 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.950687711s)
	I0214 02:55:54.492036 1135902 node_ready.go:35] waiting up to 6m0s for node "addons-107916" to be "Ready" ...
	I0214 02:55:54.495915 1135902 node_ready.go:49] node "addons-107916" has status "Ready":"True"
	I0214 02:55:54.495980 1135902 node_ready.go:38] duration metric: took 3.894354ms waiting for node "addons-107916" to be "Ready" ...
	I0214 02:55:54.496005 1135902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
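Editor's note: the node reports Ready almost immediately here (3.9ms), presumably because the kindnet CNI applied earlier is already up. The same readiness gates can be checked by hand with kubectl wait (sketch; node name and label selector taken from the log lines above):

	kubectl --context addons-107916 wait --for=condition=Ready \
	  node/addons-107916 --timeout=6m
	kubectl --context addons-107916 -n kube-system wait --for=condition=Ready \
	  pod -l k8s-app=kube-dns --timeout=6m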
	I0214 02:55:54.496601 1135902 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.090794616s)
	I0214 02:55:54.496648 1135902 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
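Editor's note: the sed pipeline that just completed (the 2.09s "replace -f -" run above) splices a hosts block ahead of the forward plugin in the Corefile, and a log directive above errors, so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expressions (stock Corefile assumed), the patched fragment should read roughly:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf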
	I0214 02:55:54.505561 1135902 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace to be "Ready" ...
	I0214 02:55:54.577614 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0214 02:55:54.577686 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0214 02:55:54.639962 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 02:55:54.666587 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0214 02:55:54.666659 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0214 02:55:54.789011 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0214 02:55:54.789083 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0214 02:55:54.937866 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0214 02:55:54.937939 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0214 02:55:55.085340 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.814283827s)
	I0214 02:55:55.126619 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0214 02:55:55.126709 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0214 02:55:55.246750 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0214 02:55:55.246821 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0214 02:55:55.282344 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0214 02:55:55.282425 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0214 02:55:55.307449 1135902 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 02:55:55.307556 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0214 02:55:55.412017 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0214 02:55:55.412089 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0214 02:55:55.664078 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 02:55:55.709152 1135902 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0214 02:55:55.709226 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0214 02:55:55.879534 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0214 02:55:56.518275 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:55:58.546610 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:55:59.225055 1135902 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0214 02:55:59.225136 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:59.245009 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:59.562260 1135902 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
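Editor's note: the gcp-auth addon seeds the node with the host's Application Default Credentials and active project id (the 162-byte JSON and 12-byte project file just copied). On a workstation those inputs would come from roughly:

	gcloud auth application-default login   # writes application_default_credentials.json
	gcloud config get-value project         # the project id minikube copies to google_cloud_project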
	I0214 02:55:59.703677 1135902 addons.go:234] Setting addon gcp-auth=true in "addons-107916"
	I0214 02:55:59.703754 1135902 host.go:66] Checking if "addons-107916" exists ...
	I0214 02:55:59.704213 1135902 cli_runner.go:164] Run: docker container inspect addons-107916 --format={{.State.Status}}
	I0214 02:55:59.727873 1135902 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0214 02:55:59.727932 1135902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-107916
	I0214 02:55:59.747597 1135902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/addons-107916/id_rsa Username:docker}
	I0214 02:55:59.946833 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.673042275s)
	I0214 02:55:59.946866 1135902 addons.go:470] Verifying addon ingress=true in "addons-107916"
	I0214 02:55:59.949783 1135902 out.go:177] * Verifying ingress addon...
	I0214 02:55:59.947054 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.589046501s)
	I0214 02:55:59.947115 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.449557178s)
	I0214 02:55:59.947148 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.369022704s)
	I0214 02:55:59.947177 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.344642223s)
	I0214 02:55:59.947217 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.163045565s)
	I0214 02:55:59.947311 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.64985113s)
	I0214 02:55:59.947349 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.456962102s)
	I0214 02:55:59.947420 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.307382914s)
	I0214 02:55:59.947430 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.039462724s)
	I0214 02:55:59.952277 1135902 addons.go:470] Verifying addon registry=true in "addons-107916"
	I0214 02:55:59.955117 1135902 out.go:177] * Verifying registry addon...
	I0214 02:55:59.953392 1135902 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0214 02:55:59.953430 1135902 addons.go:470] Verifying addon metrics-server=true in "addons-107916"
	W0214 02:55:59.953611 1135902 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0214 02:55:59.959274 1135902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0214 02:55:59.961054 1135902 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-107916 service yakd-dashboard -n yakd-dashboard
	
	I0214 02:55:59.961181 1135902 retry.go:31] will retry after 361.504562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
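Editor's note: both failures above are the classic CRD/CR ordering race: the csi-hostpath-snapclass VolumeSnapshotClass sits in the same apply batch as the CRD that defines its kind, so the apply fails until the API server has established the new type; minikube simply retries (and succeeds below with "apply --force"). One way to serialize the two steps by hand is kubectl wait on the CRD's Established condition (file names from the log):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml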
	I0214 02:55:59.970823 1135902 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0214 02:55:59.970859 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:55:59.984282 1135902 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0214 02:55:59.984309 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
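Editor's note: the kapi.go lines above (and the long runs of "waiting for pod ... Pending" below) are minikube's label-selector readiness polls. The equivalent one-shot checks with kubectl wait, using the selectors and namespaces from the log:

	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m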
	W0214 02:55:59.984841 1135902 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
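Editor's note: that warning is an optimistic-concurrency conflict, not a hard failure: two writers raced to update the "local-path" StorageClass (resourceVersion mismatch), so the default-class annotation may not have stuck on the first try. If needed, it can be re-applied idempotently:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'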
	I0214 02:56:00.325597 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 02:56:00.466814 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:00.468640 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:00.963439 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:00.971531 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:01.023525 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:01.479182 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:01.487008 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:01.697656 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.033371174s)
	I0214 02:56:01.697746 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.818027711s)
	I0214 02:56:01.697692 1135902 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-107916"
	I0214 02:56:01.697905 1135902 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.969904033s)
	I0214 02:56:01.700242 1135902 out.go:177] * Verifying csi-hostpath-driver addon...
	I0214 02:56:01.702303 1135902 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0214 02:56:01.703315 1135902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0214 02:56:01.707863 1135902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0214 02:56:01.710064 1135902 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0214 02:56:01.710093 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0214 02:56:01.716919 1135902 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0214 02:56:01.716950 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:01.778119 1135902 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0214 02:56:01.778190 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0214 02:56:01.830300 1135902 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 02:56:01.830388 1135902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0214 02:56:01.881279 1135902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 02:56:01.963933 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:01.969620 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:02.211864 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:02.464112 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:02.469209 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:02.575146 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.249478297s)
	I0214 02:56:02.711362 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:02.967514 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:02.976360 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:03.050248 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:03.061327 1135902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.179959145s)
	I0214 02:56:03.064152 1135902 addons.go:470] Verifying addon gcp-auth=true in "addons-107916"
	I0214 02:56:03.066372 1135902 out.go:177] * Verifying gcp-auth addon...
	I0214 02:56:03.070340 1135902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0214 02:56:03.080693 1135902 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0214 02:56:03.080720 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:03.212449 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:03.463188 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:03.467335 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:03.574875 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:03.711065 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:03.963187 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:03.966628 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:04.074317 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:04.211633 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:04.468620 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:04.476510 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:04.574757 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:04.711748 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:04.965112 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:04.968873 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:05.075297 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:05.210703 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:05.475389 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:05.477316 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:05.512931 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:05.574638 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:05.712092 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:05.963665 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:05.966460 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:06.075175 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:06.211277 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:06.464419 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:06.467032 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:06.574267 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:06.710889 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:06.969464 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:06.970426 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:07.076907 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:07.211917 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:07.463693 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:07.467165 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:07.513047 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:07.574411 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:07.712164 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:07.965219 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:07.967572 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:08.075067 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:08.211975 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:08.463887 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:08.467745 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:08.574794 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:08.712120 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:08.964018 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:08.969298 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:09.075598 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:09.212453 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:09.464794 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:09.469866 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:09.513608 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:09.574814 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:09.723571 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:09.964946 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:09.968427 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:10.075434 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:10.212019 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:10.463700 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:10.467203 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:10.579827 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:10.711696 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:10.963727 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:10.966068 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:11.074612 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:11.212352 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:11.467740 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:11.470207 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:11.575440 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:11.711123 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:11.969105 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:11.970599 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:12.017942 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:12.074806 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:12.210941 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:12.468556 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:12.469016 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:12.574063 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:12.712483 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:12.963008 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:12.967213 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:13.074677 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:13.211512 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:13.463240 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:13.466768 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:13.574788 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:13.710803 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:13.963776 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:13.965515 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:14.074623 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:14.212141 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:14.462531 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:14.466716 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:14.512466 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:14.574238 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:14.710917 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:14.964653 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:14.965821 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:15.074890 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:15.211384 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:15.463733 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:15.468088 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:15.574659 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:15.711162 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:15.963236 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:15.966352 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:16.074601 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:16.211050 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:16.463768 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:16.467175 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:16.512872 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:16.574742 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:16.711555 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:16.963724 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:16.967539 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:17.074120 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:17.211908 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:17.465784 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:17.466911 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:17.574710 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:17.711470 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:17.962785 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:17.965698 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:18.078143 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:18.211301 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:18.462396 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:18.466702 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:18.574346 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:18.711319 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:18.962725 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:18.965763 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:19.013149 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:19.074339 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:19.211320 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:19.463665 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:19.466734 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:19.574365 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:19.712460 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:19.966550 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:19.967611 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:20.075841 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:20.212454 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:20.463304 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:20.467178 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:20.576374 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:20.712490 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:20.962947 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:20.966036 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:21.074330 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:21.211227 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:21.462655 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:21.466049 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:21.512524 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:21.574041 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:21.711453 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:21.962932 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:21.965673 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:22.074722 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:22.211004 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:22.465491 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:22.466499 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:22.574833 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:22.711443 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:22.963855 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:22.968059 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:23.074105 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:23.211443 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:23.464424 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:23.466625 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:23.512649 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:23.574010 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:23.711887 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:23.963963 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:23.966286 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:24.074532 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:24.211525 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:24.467239 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:24.468193 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:24.575134 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:24.713028 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:24.965216 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:24.967554 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:25.082074 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:25.211722 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:25.463195 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:25.467121 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:25.575322 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:25.714198 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:25.968557 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:25.973101 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:26.013660 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:26.075341 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:26.211618 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:26.463171 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:26.467701 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:26.574606 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:26.711117 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:26.964479 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:26.966544 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:27.074750 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:27.211693 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:27.463994 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:27.466476 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:27.573867 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:27.711663 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:27.962909 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:27.966029 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:28.015839 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:28.077461 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:28.211627 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:28.463906 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:28.467143 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:28.575147 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:28.714141 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:28.965776 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:28.969632 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:29.074191 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:29.220493 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:29.463121 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:29.466030 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:29.574592 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:29.711967 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:29.964513 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:29.969249 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:30.034927 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:30.075603 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:30.217450 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:30.462879 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:30.467528 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:30.574747 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:30.711958 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:30.974600 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:31.001129 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:31.074746 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:31.211466 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:31.464135 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:31.468570 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:31.574671 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:31.711810 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:31.964240 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:31.971575 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:32.075151 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:32.211550 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:32.463905 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:32.469287 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:32.513999 1135902 pod_ready.go:102] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"False"
	I0214 02:56:32.574915 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:32.711454 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:32.965556 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:32.970305 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:33.089025 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:33.211799 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:33.463643 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:33.467822 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:33.513207 1135902 pod_ready.go:92] pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.513233 1135902 pod_ready.go:81] duration metric: took 39.00759531s waiting for pod "coredns-5dd5756b68-frpgv" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.513245 1135902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.519385 1135902 pod_ready.go:92] pod "etcd-addons-107916" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.519409 1135902 pod_ready.go:81] duration metric: took 6.155504ms waiting for pod "etcd-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.519423 1135902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.525907 1135902 pod_ready.go:92] pod "kube-apiserver-addons-107916" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.525934 1135902 pod_ready.go:81] duration metric: took 6.501612ms waiting for pod "kube-apiserver-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.525946 1135902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.532317 1135902 pod_ready.go:92] pod "kube-controller-manager-addons-107916" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.532342 1135902 pod_ready.go:81] duration metric: took 6.388105ms waiting for pod "kube-controller-manager-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.532353 1135902 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wqqx2" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.538857 1135902 pod_ready.go:92] pod "kube-proxy-wqqx2" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.538884 1135902 pod_ready.go:81] duration metric: took 6.52237ms waiting for pod "kube-proxy-wqqx2" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.538896 1135902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.575137 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:33.716219 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:33.910518 1135902 pod_ready.go:92] pod "kube-scheduler-addons-107916" in "kube-system" namespace has status "Ready":"True"
	I0214 02:56:33.910547 1135902 pod_ready.go:81] duration metric: took 371.643092ms waiting for pod "kube-scheduler-addons-107916" in "kube-system" namespace to be "Ready" ...
	I0214 02:56:33.910559 1135902 pod_ready.go:38] duration metric: took 39.414528016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 02:56:33.910633 1135902 api_server.go:52] waiting for apiserver process to appear ...
	I0214 02:56:33.910723 1135902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 02:56:33.926908 1135902 api_server.go:72] duration metric: took 41.394553358s to wait for apiserver process to appear ...
	I0214 02:56:33.926936 1135902 api_server.go:88] waiting for apiserver healthz status ...
	I0214 02:56:33.926957 1135902 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0214 02:56:33.936854 1135902 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0214 02:56:33.938451 1135902 api_server.go:141] control plane version: v1.28.4
	I0214 02:56:33.938480 1135902 api_server.go:131] duration metric: took 11.535734ms to wait for apiserver health ...
	I0214 02:56:33.938489 1135902 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 02:56:33.963864 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:33.968450 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:34.074759 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:34.118074 1135902 system_pods.go:59] 18 kube-system pods found
	I0214 02:56:34.118111 1135902 system_pods.go:61] "coredns-5dd5756b68-frpgv" [725a8f05-de51-4e9b-b8c0-8c1c0c28b9d8] Running
	I0214 02:56:34.118121 1135902 system_pods.go:61] "csi-hostpath-attacher-0" [f819a79e-8675-4723-8c63-c8c1c0564130] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 02:56:34.118131 1135902 system_pods.go:61] "csi-hostpath-resizer-0" [fbb76b19-2a31-4945-96a0-3cecc97d33ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 02:56:34.118141 1135902 system_pods.go:61] "csi-hostpathplugin-5fqvb" [33dff35f-ee07-4587-89ba-846f0bee07db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 02:56:34.118153 1135902 system_pods.go:61] "etcd-addons-107916" [1fe5f000-92d4-47ad-aba2-3ba7e884263e] Running
	I0214 02:56:34.118166 1135902 system_pods.go:61] "kindnet-rthjj" [75af4cf2-01b1-4dca-9bfd-7c24b3dc528e] Running
	I0214 02:56:34.118171 1135902 system_pods.go:61] "kube-apiserver-addons-107916" [36c01377-8ded-4c47-8178-340afadcc26c] Running
	I0214 02:56:34.118179 1135902 system_pods.go:61] "kube-controller-manager-addons-107916" [05719c3d-db88-46b8-bcb5-a11b50f1a47b] Running
	I0214 02:56:34.118187 1135902 system_pods.go:61] "kube-ingress-dns-minikube" [6015e7be-aeae-4d2f-a1ee-3f92e61da1e5] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 02:56:34.118192 1135902 system_pods.go:61] "kube-proxy-wqqx2" [2628f6b4-92c2-45ac-8ef2-cd9a32918e0b] Running
	I0214 02:56:34.118202 1135902 system_pods.go:61] "kube-scheduler-addons-107916" [ea27f96b-bb9c-4fe5-b4dc-41a0dd834064] Running
	I0214 02:56:34.118209 1135902 system_pods.go:61] "metrics-server-69cf46c98-xgpcx" [a75b205a-055e-4b2e-82c2-53e542d18ae2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 02:56:34.118250 1135902 system_pods.go:61] "nvidia-device-plugin-daemonset-qp5mc" [e83ab22d-76cc-418f-9a1e-704888f17ca0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 02:56:34.118265 1135902 system_pods.go:61] "registry-proxy-4vspg" [ed6185ac-833d-49bc-9dbd-44ca26c256ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 02:56:34.118271 1135902 system_pods.go:61] "registry-vq7pw" [76ecca74-b904-428a-957c-e497f46f916d] Running
	I0214 02:56:34.118282 1135902 system_pods.go:61] "snapshot-controller-58dbcc7b99-6tw9t" [a3c8748f-3bbf-450a-8ab8-f682dc3540b3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 02:56:34.118290 1135902 system_pods.go:61] "snapshot-controller-58dbcc7b99-mxxxv" [05d7fa06-002b-46bb-bfca-2acdd4c8d6c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 02:56:34.118299 1135902 system_pods.go:61] "storage-provisioner" [2c9e38cc-5e48-4667-a0a7-9ac74e980de2] Running
	I0214 02:56:34.118306 1135902 system_pods.go:74] duration metric: took 179.811425ms to wait for pod list to return data ...
	I0214 02:56:34.118318 1135902 default_sa.go:34] waiting for default service account to be created ...
	I0214 02:56:34.212384 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:34.311229 1135902 default_sa.go:45] found service account: "default"
	I0214 02:56:34.311258 1135902 default_sa.go:55] duration metric: took 192.932347ms for default service account to be created ...
	I0214 02:56:34.311277 1135902 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 02:56:34.465096 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:34.467847 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:34.518648 1135902 system_pods.go:86] 18 kube-system pods found
	I0214 02:56:34.518680 1135902 system_pods.go:89] "coredns-5dd5756b68-frpgv" [725a8f05-de51-4e9b-b8c0-8c1c0c28b9d8] Running
	I0214 02:56:34.518690 1135902 system_pods.go:89] "csi-hostpath-attacher-0" [f819a79e-8675-4723-8c63-c8c1c0564130] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 02:56:34.518699 1135902 system_pods.go:89] "csi-hostpath-resizer-0" [fbb76b19-2a31-4945-96a0-3cecc97d33ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 02:56:34.518710 1135902 system_pods.go:89] "csi-hostpathplugin-5fqvb" [33dff35f-ee07-4587-89ba-846f0bee07db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 02:56:34.518717 1135902 system_pods.go:89] "etcd-addons-107916" [1fe5f000-92d4-47ad-aba2-3ba7e884263e] Running
	I0214 02:56:34.518727 1135902 system_pods.go:89] "kindnet-rthjj" [75af4cf2-01b1-4dca-9bfd-7c24b3dc528e] Running
	I0214 02:56:34.518733 1135902 system_pods.go:89] "kube-apiserver-addons-107916" [36c01377-8ded-4c47-8178-340afadcc26c] Running
	I0214 02:56:34.518742 1135902 system_pods.go:89] "kube-controller-manager-addons-107916" [05719c3d-db88-46b8-bcb5-a11b50f1a47b] Running
	I0214 02:56:34.518750 1135902 system_pods.go:89] "kube-ingress-dns-minikube" [6015e7be-aeae-4d2f-a1ee-3f92e61da1e5] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 02:56:34.518756 1135902 system_pods.go:89] "kube-proxy-wqqx2" [2628f6b4-92c2-45ac-8ef2-cd9a32918e0b] Running
	I0214 02:56:34.518764 1135902 system_pods.go:89] "kube-scheduler-addons-107916" [ea27f96b-bb9c-4fe5-b4dc-41a0dd834064] Running
	I0214 02:56:34.518771 1135902 system_pods.go:89] "metrics-server-69cf46c98-xgpcx" [a75b205a-055e-4b2e-82c2-53e542d18ae2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 02:56:34.518782 1135902 system_pods.go:89] "nvidia-device-plugin-daemonset-qp5mc" [e83ab22d-76cc-418f-9a1e-704888f17ca0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 02:56:34.518789 1135902 system_pods.go:89] "registry-proxy-4vspg" [ed6185ac-833d-49bc-9dbd-44ca26c256ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 02:56:34.518796 1135902 system_pods.go:89] "registry-vq7pw" [76ecca74-b904-428a-957c-e497f46f916d] Running
	I0214 02:56:34.518803 1135902 system_pods.go:89] "snapshot-controller-58dbcc7b99-6tw9t" [a3c8748f-3bbf-450a-8ab8-f682dc3540b3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 02:56:34.518811 1135902 system_pods.go:89] "snapshot-controller-58dbcc7b99-mxxxv" [05d7fa06-002b-46bb-bfca-2acdd4c8d6c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 02:56:34.518818 1135902 system_pods.go:89] "storage-provisioner" [2c9e38cc-5e48-4667-a0a7-9ac74e980de2] Running
	I0214 02:56:34.518826 1135902 system_pods.go:126] duration metric: took 207.5432ms to wait for k8s-apps to be running ...
	I0214 02:56:34.518838 1135902 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 02:56:34.518898 1135902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 02:56:34.534371 1135902 system_svc.go:56] duration metric: took 15.522033ms WaitForService to wait for kubelet.
	I0214 02:56:34.534404 1135902 kubeadm.go:581] duration metric: took 42.002049987s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0214 02:56:34.534446 1135902 node_conditions.go:102] verifying NodePressure condition ...
	I0214 02:56:34.573739 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:34.713804 1135902 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 02:56:34.713845 1135902 node_conditions.go:123] node cpu capacity is 2
	I0214 02:56:34.713859 1135902 node_conditions.go:105] duration metric: took 179.403403ms to run NodePressure ...
	I0214 02:56:34.713871 1135902 start.go:228] waiting for startup goroutines ...
	I0214 02:56:34.715360 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:34.963963 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:34.966197 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:35.075715 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:35.211898 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:35.463303 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:35.466302 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:35.573929 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:35.711879 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:35.964996 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:35.966118 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:56:36.074398 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:36.212398 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:36.465560 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:36.467207 1135902 kapi.go:107] duration metric: took 36.507928754s to wait for kubernetes.io/minikube-addons=registry ...
	I0214 02:56:36.574686 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:36.711507 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:36.963141 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:37.075628 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:37.212307 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:37.463151 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:37.575166 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:37.712211 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:37.963529 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:38.075133 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:38.213023 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:38.464305 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:38.574829 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:38.711440 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:38.962956 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:39.081028 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:39.218968 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:39.464684 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:39.581641 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:39.711520 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:39.963217 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:40.076108 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:40.211929 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:40.463368 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:40.574458 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:40.719834 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:40.967016 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:41.074781 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:41.211201 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:41.464163 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:41.574839 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:41.712230 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:41.964017 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:42.075693 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:42.213423 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:42.463349 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:42.574527 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:42.712675 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:42.963568 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:43.074302 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:43.211719 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:43.463051 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:43.574951 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:43.713806 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:43.963209 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:44.074808 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:44.212950 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:44.464019 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:44.575081 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:44.711252 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:44.963193 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:45.085170 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:45.225919 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:45.462891 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:45.574838 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:45.712162 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:45.963527 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:46.074707 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:46.213851 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:46.463560 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:46.574676 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:46.717508 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:46.963643 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:47.082366 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:47.211109 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:47.464115 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:47.575202 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:47.715260 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:47.963534 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:48.074903 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:48.211309 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:48.463736 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:48.574370 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:48.711381 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:48.963519 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:49.082174 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:49.212046 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:49.463631 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:49.574376 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:49.719827 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:49.963800 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:50.085989 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:50.213191 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:50.464265 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:50.574917 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:50.712277 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:50.965760 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:51.074635 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:51.214381 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:51.465277 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:51.574148 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:51.711258 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:51.963194 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:52.074340 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:52.211751 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:52.463308 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:52.574217 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:52.712321 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:52.962706 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:53.074261 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:56:53.210449 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:53.464591 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:53.574015 1135902 kapi.go:107] duration metric: took 50.503674897s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0214 02:56:53.579059 1135902 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-107916 cluster.
	I0214 02:56:53.581165 1135902 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0214 02:56:53.583105 1135902 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0214 02:56:53.713600 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:53.963124 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:54.211650 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:54.462937 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:54.711280 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:54.962680 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:55.213776 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:55.463416 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:55.711676 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:55.963681 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:56.213205 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:56.462806 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:56.711346 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:56.963362 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:57.210681 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:57.463106 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:57.713730 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:57.963440 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:58.211333 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:58.477589 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:58.712168 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:58.966022 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:59.211773 1135902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:56:59.462825 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:56:59.711553 1135902 kapi.go:107] duration metric: took 58.008239466s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0214 02:56:59.963678 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:00.465089 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:00.962524 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:01.463377 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:01.963205 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:02.463268 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:02.963749 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:03.463176 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:03.962891 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:04.463644 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:04.963638 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:05.462626 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:05.969143 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:06.463570 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:06.963613 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:07.465834 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:07.963195 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:08.463237 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:08.962760 1135902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:57:09.465758 1135902 kapi.go:107] duration metric: took 1m9.512364179s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0214 02:57:09.467817 1135902 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0214 02:57:09.469746 1135902 addons.go:505] enable addons completed in 1m17.487639726s: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0214 02:57:09.469797 1135902 start.go:233] waiting for cluster config update ...
	I0214 02:57:09.469832 1135902 start.go:242] writing updated cluster config ...
	I0214 02:57:09.470179 1135902 ssh_runner.go:195] Run: rm -f paused
	I0214 02:57:09.824976 1135902 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0214 02:57:09.827770 1135902 out.go:177] * Done! kubectl is now configured to use "addons-107916" cluster and "default" namespace by default
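The kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending; kapi.go:107 then records how long the wait took. A minimal client-go sketch of that poll-until-running pattern (an illustrative helper, not minikube's actual implementation; the package and function names are hypothetical):

package poll

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPods polls until every pod matching selector is Running, logging the
// current phase on each attempt, then reports how long the wait took.
func WaitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}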
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	faf8c51efff8f       dd1b12fcb6097       15 seconds ago       Exited              hello-world-app                          1                   9eef6a590e24f       hello-world-app-5d77478584-vrjll
	1858190050b4d       d315ef79be32c       25 seconds ago       Running             nginx                                    0                   0e50a2d1a7dd8       nginx
	36f8987acedf5       fc9db2894f4e4       47 seconds ago       Exited              helper-pod                               0                   9aa54732e524e       helper-pod-delete-pvc-2358e9d1-a1ee-49c0-8dab-57be5f72d3ad
	7bed06d411831       21648f71be814       About a minute ago   Running             headlamp                                 0                   9abbb53f4262f       headlamp-7ddfbb94ff-59lmx
	e724582a408a1       fe00dc95515ba       About a minute ago   Running             controller                               0                   28a9be30ce59c       ingress-nginx-controller-7967645744-4btrf
	1b04ca41722c7       ee6d597e62dc8       About a minute ago   Running             csi-snapshotter                          0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	955f8a4d6be84       642ded511e141       About a minute ago   Running             csi-provisioner                          0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	a3efd34b39f49       922312104da8a       About a minute ago   Running             liveness-probe                           0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	2b70b5b79de92       08f6b2990811a       About a minute ago   Running             hostpath                                 0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	1b61eda6a526a       0107d56dbc0be       About a minute ago   Running             node-driver-registrar                    0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	519c86bb0f78b       2a5f29343eb03       About a minute ago   Running             gcp-auth                                 0                   c148a5cb55766       gcp-auth-d4c87556c-n5vd4
	970467a7928ea       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   2145126ac5ccb       local-path-provisioner-78b46b4d5c-h6679
	f6c74fccf9510       20e3f2db01e81       About a minute ago   Running             yakd                                     0                   4d64c95b0fcbc       yakd-dashboard-9947fc6bf-mv4gt
	ef7b5974b2820       9a80d518f102c       About a minute ago   Running             csi-attacher                             0                   6f0a337c500b5       csi-hostpath-attacher-0
	c8cb76b91902c       487fa743e1e22       About a minute ago   Running             csi-resizer                              0                   827419cdf4c3c       csi-hostpath-resizer-0
	a58073f9256ac       f8c5dfd0ede5f       About a minute ago   Exited              patch                                    2                   67fe6bb56b812       ingress-nginx-admission-patch-5rc9q
	360779012a3e0       f8c5dfd0ede5f       About a minute ago   Exited              create                                   0                   a13cb4d754b97       ingress-nginx-admission-create-brrq9
	b35a6c596cfa6       1461903ec4fe9       About a minute ago   Running             csi-external-health-monitor-controller   0                   2eb4caa4f0136       csi-hostpathplugin-5fqvb
	a8b9ad96a4cf3       97e04611ad434       About a minute ago   Running             coredns                                  0                   8145a79f8e3c5       coredns-5dd5756b68-frpgv
	7aa64a9ad4c3a       ba04bb24b9575       2 minutes ago        Running             storage-provisioner                      0                   fe384abf830ea       storage-provisioner
	fa504d1b8fe72       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                               0                   4355787625a52       kube-proxy-wqqx2
	244bda7cba554       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni                              0                   5733ea23009bc       kindnet-rthjj
	400b3f8f47c62       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver                           0                   5e70764e3f506       kube-apiserver-addons-107916
	b80dce1302efb       05c284c929889       2 minutes ago        Running             kube-scheduler                           0                   eee3d9018507d       kube-scheduler-addons-107916
	f80836cb769d3       9961cbceaf234       2 minutes ago        Running             kube-controller-manager                  0                   972feb1225acd       kube-controller-manager-addons-107916
	873197f66b7ad       9cdd6470f48c8       2 minutes ago        Running             etcd                                     0                   c7ffa69b73291       etcd-addons-107916
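In the table above, ATTEMPT is the container's restart counter: hello-world-app is on attempt 1 and currently Exited, i.e. it has already crashed and been restarted once. The same number is visible through the API as the pod's container restart counts; a small sketch assuming client-go types:

package poll

import corev1 "k8s.io/api/core/v1"

// restarts sums restart counts across a pod's containers; the ATTEMPT column
// above reports the same counter for each individual container.
func restarts(pod *corev1.Pod) int32 {
	var n int32
	for _, cs := range pod.Status.ContainerStatuses {
		n += cs.RestartCount
	}
	return n
}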
	
	
	==> containerd <==
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.195201254Z" level=info msg="StopPodSandbox for \"d1849e7071193566a27cde6cdcbe03e15f4931c3d969cc5cb9d23dc3608d1f1b\""
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.195487261Z" level=info msg="Container to stop \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.200647764Z" level=info msg="StopContainer for \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\" returns successfully"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.201357004Z" level=info msg="StopPodSandbox for \"99fe5692040ced0c1878d0a1070124e25963e2bf7230ce142eef720033053268\""
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.201527798Z" level=info msg="Container to stop \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.261084276Z" level=info msg="shim disconnected" id=d1849e7071193566a27cde6cdcbe03e15f4931c3d969cc5cb9d23dc3608d1f1b
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.262072295Z" level=warning msg="cleaning up after shim disconnected" id=d1849e7071193566a27cde6cdcbe03e15f4931c3d969cc5cb9d23dc3608d1f1b namespace=k8s.io
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.262207184Z" level=info msg="cleaning up dead shim"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.275401270Z" level=info msg="shim disconnected" id=99fe5692040ced0c1878d0a1070124e25963e2bf7230ce142eef720033053268
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.277095124Z" level=warning msg="cleaning up after shim disconnected" id=99fe5692040ced0c1878d0a1070124e25963e2bf7230ce142eef720033053268 namespace=k8s.io
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.277199761Z" level=info msg="cleaning up dead shim"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.284561182Z" level=warning msg="cleanup warnings time=\"2024-02-14T02:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10280 runtime=io.containerd.runc.v2\n"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.297326766Z" level=warning msg="cleanup warnings time=\"2024-02-14T02:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10289 runtime=io.containerd.runc.v2\n"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.336556147Z" level=info msg="TearDown network for sandbox \"d1849e7071193566a27cde6cdcbe03e15f4931c3d969cc5cb9d23dc3608d1f1b\" successfully"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.336736507Z" level=info msg="StopPodSandbox for \"d1849e7071193566a27cde6cdcbe03e15f4931c3d969cc5cb9d23dc3608d1f1b\" returns successfully"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.364555435Z" level=info msg="TearDown network for sandbox \"99fe5692040ced0c1878d0a1070124e25963e2bf7230ce142eef720033053268\" successfully"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.364754010Z" level=info msg="StopPodSandbox for \"99fe5692040ced0c1878d0a1070124e25963e2bf7230ce142eef720033053268\" returns successfully"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.602533250Z" level=info msg="StopContainer for \"e724582a408a125eba7218650a6cd2025803dc85d11eb904cfa8505398668dda\" with timeout 2 (s)"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.602959684Z" level=info msg="Stop container \"e724582a408a125eba7218650a6cd2025803dc85d11eb904cfa8505398668dda\" with signal terminated"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.791325847Z" level=info msg="RemoveContainer for \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\""
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.798723804Z" level=info msg="RemoveContainer for \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\" returns successfully"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.799261004Z" level=error msg="ContainerStatus for \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\": not found"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.801512456Z" level=info msg="RemoveContainer for \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\""
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.819435302Z" level=info msg="RemoveContainer for \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\" returns successfully"
	Feb 14 02:58:23 addons-107916 containerd[739]: time="2024-02-14T02:58:23.831950850Z" level=error msg="ContainerStatus for \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\": not found"
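The two "ContainerStatus ... not found" errors at the end are expected: the kubelet asks about a container it has just removed, and containerd answers with gRPC NotFound. CRI clients typically treat that code as benign; a minimal sketch of the check (illustrative, not the kubelet's actual code):

package poll

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound filters out the NotFound that follows a successful
// RemoveContainer, as seen in the containerd log above.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}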
	
	
	==> coredns [a8b9ad96a4cf381ad63937cb9cd00b9b8fb38ee1eba33858825e01aed6d326a2] <==
	[INFO] 10.244.0.20:48714 - 61020 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084338s
	[INFO] 10.244.0.20:48714 - 23426 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045308s
	[INFO] 10.244.0.20:48714 - 39862 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069422s
	[INFO] 10.244.0.20:48714 - 18198 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000102741s
	[INFO] 10.244.0.20:48714 - 22409 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001469975s
	[INFO] 10.244.0.20:48714 - 14926 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00119155s
	[INFO] 10.244.0.20:48714 - 50223 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081639s
	[INFO] 10.244.0.20:42653 - 48853 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105499s
	[INFO] 10.244.0.20:42653 - 18553 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059436s
	[INFO] 10.244.0.20:35425 - 9339 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000852s
	[INFO] 10.244.0.20:35425 - 16205 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066181s
	[INFO] 10.244.0.20:42653 - 9862 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000117232s
	[INFO] 10.244.0.20:42653 - 19290 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079661s
	[INFO] 10.244.0.20:35425 - 11610 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044052s
	[INFO] 10.244.0.20:35425 - 65115 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000300759s
	[INFO] 10.244.0.20:42653 - 10670 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042083s
	[INFO] 10.244.0.20:35425 - 25109 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000141854s
	[INFO] 10.244.0.20:35425 - 53028 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044585s
	[INFO] 10.244.0.20:42653 - 5577 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034362s
	[INFO] 10.244.0.20:35425 - 16456 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001297065s
	[INFO] 10.244.0.20:35425 - 33151 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001062568s
	[INFO] 10.244.0.20:35425 - 50309 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074255s
	[INFO] 10.244.0.20:42653 - 63917 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004257675s
	[INFO] 10.244.0.20:42653 - 22426 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001077281s
	[INFO] 10.244.0.20:42653 - 35198 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067149s
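The NXDOMAIN-then-NOERROR runs above are the pod's DNS search path at work: the resolver tries the name with each search suffix appended (governed by the ndots option in the pod's resolv.conf) and only the final, fully-qualified query succeeds. A sketch of that expansion; the search list here is inferred from the queries logged, not read from the pod:

package poll

// searchExpand reproduces the query order seen in the coredns log: each
// search suffix is tried first, then the name as given.
func searchExpand(name string, search []string) []string {
	out := make([]string, 0, len(search)+1)
	for _, s := range search {
		out = append(out, name+"."+s)
	}
	return append(out, name)
}

// searchExpand("hello-world-app.default.svc.cluster.local",
//	[]string{"ingress-nginx.svc.cluster.local", "svc.cluster.local",
//		"cluster.local", "us-east-2.compute.internal"})
// yields the five names queried above: NXDOMAIN for the first four,
// NOERROR for the last.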
	
	
	==> describe nodes <==
	Name:               addons-107916
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-107916
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40f210e92693e4612e04be0697de06db21ac5cf0
	                    minikube.k8s.io/name=addons-107916
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T02_55_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-107916
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-107916"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 02:55:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-107916
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 02:58:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 02:58:12 +0000   Wed, 14 Feb 2024 02:55:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 02:58:12 +0000   Wed, 14 Feb 2024 02:55:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 02:58:12 +0000   Wed, 14 Feb 2024 02:55:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 02:58:12 +0000   Wed, 14 Feb 2024 02:55:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-107916
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 244d646cad4e4280924e3223b942fed4
	  System UUID:                281434a1-6832-43d6-8627-858f8134a6ff
	  Boot ID:                    b6f8a130-5377-4a84-9795-3edbfc6d2fc5
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-vrjll           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  gcp-auth                    gcp-auth-d4c87556c-n5vd4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  headlamp                    headlamp-7ddfbb94ff-59lmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 coredns-5dd5756b68-frpgv                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m32s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 csi-hostpathplugin-5fqvb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 etcd-addons-107916                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m45s
	  kube-system                 kindnet-rthjj                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m32s
	  kube-system                 kube-apiserver-addons-107916               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-controller-manager-addons-107916      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-proxy-wqqx2                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-addons-107916               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  local-path-storage          local-path-provisioner-78b46b4d5c-h6679    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-mv4gt             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x8 over 2m53s)  kubelet          Node addons-107916 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x8 over 2m53s)  kubelet          Node addons-107916 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x7 over 2m53s)  kubelet          Node addons-107916 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m45s                  kubelet          Node addons-107916 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s                  kubelet          Node addons-107916 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m45s                  kubelet          Node addons-107916 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m45s                  kubelet          Node addons-107916 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m45s                  kubelet          Node addons-107916 status is now: NodeReady
	  Normal  Starting                 2m45s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m33s                  node-controller  Node addons-107916 event: Registered Node addons-107916 in Controller
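The percentages in the Allocated resources table are each request or limit divided by the node's allocatable capacity (2 CPUs and 8022496Ki of memory, per the sections above). A small sketch of that arithmetic with apimachinery quantities:

package poll

import "k8s.io/apimachinery/pkg/api/resource"

// percentOf computes request/allocatable as an integer percentage, the way
// the Allocated resources table above is derived.
func percentOf(req, allocatable resource.Quantity) int64 {
	return req.MilliValue() * 100 / allocatable.MilliValue()
}

// percentOf(resource.MustParse("850m"), resource.MustParse("2")) == 42,
// matching the "850m (42%)" CPU requests row on this 2-CPU node.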
	
	
	==> dmesg <==
	[  +0.001133] FS-Cache: O-key=[8] '2bd5c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001075] FS-Cache: N-key=[8] '2bd5c90000000000'
	[  +0.002828] FS-Cache: Duplicate cookie detected
	[  +0.000708] FS-Cache: O-cookie c=0000003b [p=00000039 fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=0000000076fc1031
	[  +0.001081] FS-Cache: O-key=[8] '2bd5c90000000000'
	[  +0.000709] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000005e2f857b
	[  +0.001050] FS-Cache: N-key=[8] '2bd5c90000000000'
	[  +2.757072] FS-Cache: Duplicate cookie detected
	[  +0.000789] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000994] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=0000000073828904
	[  +0.001121] FS-Cache: O-key=[8] '2ad5c90000000000'
	[  +0.000813] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001101] FS-Cache: N-key=[8] '2ad5c90000000000'
	[  +0.290556] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000975] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=00000000eab8090b
	[  +0.001047] FS-Cache: O-key=[8] '30d5c90000000000'
	[  +0.000761] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=00000000bc792bf3
	[  +0.001026] FS-Cache: N-key=[8] '30d5c90000000000'
	
	
	==> etcd [873197f66b7ad68ed2fb2cbf1116587a9c2034c96c29937c781284a776a67d44] <==
	{"level":"info","ts":"2024-02-14T02:55:32.472253Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-14T02:55:32.472433Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-14T02:55:32.472458Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-14T02:55:32.472558Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T02:55:32.472568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T02:55:32.47285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-14T02:55:32.472958Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-14T02:55:33.451629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-14T02:55:33.451756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-14T02:55:33.451824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-02-14T02:55:33.451883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-02-14T02:55:33.451927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T02:55:33.45197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-14T02:55:33.45201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T02:55:33.459562Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:55:33.459868Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-107916 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T02:55:33.46004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T02:55:33.461106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-14T02:55:33.461289Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T02:55:33.462211Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T02:55:33.462466Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T02:55:33.463503Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T02:55:33.505947Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:55:33.507576Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:55:33.507609Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [519c86bb0f78be69999f9a4055dcfbcdb019ed71fd8c24bac655163ac496009f] <==
	2024/02/14 02:56:52 GCP Auth Webhook started!
	2024/02/14 02:57:11 Ready to marshal response ...
	2024/02/14 02:57:11 Ready to write response ...
	2024/02/14 02:57:11 Ready to marshal response ...
	2024/02/14 02:57:11 Ready to write response ...
	2024/02/14 02:57:11 Ready to marshal response ...
	2024/02/14 02:57:11 Ready to write response ...
	2024/02/14 02:57:22 Ready to marshal response ...
	2024/02/14 02:57:22 Ready to write response ...
	2024/02/14 02:57:28 Ready to marshal response ...
	2024/02/14 02:57:28 Ready to write response ...
	2024/02/14 02:57:28 Ready to marshal response ...
	2024/02/14 02:57:28 Ready to write response ...
	2024/02/14 02:57:36 Ready to marshal response ...
	2024/02/14 02:57:36 Ready to write response ...
	2024/02/14 02:57:40 Ready to marshal response ...
	2024/02/14 02:57:40 Ready to write response ...
	2024/02/14 02:57:57 Ready to marshal response ...
	2024/02/14 02:57:57 Ready to write response ...
	2024/02/14 02:58:05 Ready to marshal response ...
	2024/02/14 02:58:05 Ready to write response ...
	2024/02/14 02:58:12 Ready to marshal response ...
	2024/02/14 02:58:12 Ready to write response ...
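Each "Ready to marshal/write response" pair above is the gcp-auth mutating webhook admitting a pod and injecting the credential mount described in the addon-enable output earlier. A pod opts out with the gcp-auth-skip-secret label mentioned there; a minimal client-go sketch of creating such a pod (pod name and image are illustrative):

package poll

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSkippedPod creates a pod carrying the gcp-auth-skip-secret label so
// the webhook leaves its spec unmodified.
func createSkippedPod(ctx context.Context, cs kubernetes.Interface) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds", // hypothetical pod name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	return cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
}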
	
	
	==> kernel <==
	 02:58:24 up  5:40,  0 users,  load average: 2.38, 1.60, 1.82
	Linux addons-107916 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [244bda7cba5544ede26d1a161fde871f1f0343ea53826e30e2579932cb6fe3e1] <==
	I0214 02:56:23.552361       1 main.go:227] handling current node
	I0214 02:56:33.567574       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:56:33.567599       1 main.go:227] handling current node
	I0214 02:56:43.576727       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:56:43.576760       1 main.go:227] handling current node
	I0214 02:56:53.580637       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:56:53.580666       1 main.go:227] handling current node
	I0214 02:57:03.593556       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:03.593585       1 main.go:227] handling current node
	I0214 02:57:13.606457       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:13.606486       1 main.go:227] handling current node
	I0214 02:57:23.619513       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:23.619540       1 main.go:227] handling current node
	I0214 02:57:33.624359       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:33.624390       1 main.go:227] handling current node
	I0214 02:57:43.634965       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:43.634990       1 main.go:227] handling current node
	I0214 02:57:53.639167       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:57:53.639198       1 main.go:227] handling current node
	I0214 02:58:03.652747       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:58:03.652777       1 main.go:227] handling current node
	I0214 02:58:13.664129       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:58:13.664157       1 main.go:227] handling current node
	I0214 02:58:23.669283       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 02:58:23.669321       1 main.go:227] handling current node
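kindnet's steady ~10s "Handling node with IPs" cadence above is a fixed-interval reconcile over the cluster's nodes. A minimal loop with the same shape (a sketch assuming client-go; kindnet's real implementation may differ):

package poll

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// reconcileNodes lists the cluster's nodes every 10 seconds until ctx is
// cancelled, handling each node on every tick.
func reconcileNodes(ctx context.Context, cs kubernetes.Interface) {
	wait.Until(func() {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return // retry on the next tick
		}
		for _, n := range nodes.Items {
			fmt.Printf("Handling node %s\n", n.Name)
		}
	}, 10*time.Second, ctx.Done())
}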
	
	
	==> kube-apiserver [400b3f8f47c624b7f70d161c4f843d2f25f3215c8c38a178fac0938c3bbfa36c] <==
	W0214 02:57:45.958585       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0214 02:57:50.220776       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0214 02:57:57.506525       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0214 02:57:57.751188       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.160.194"}
	I0214 02:57:58.117491       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0214 02:58:05.521665       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.122.53"}
	I0214 02:58:22.864228       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.864291       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.878527       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.878750       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.895894       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.899313       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.998330       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.998372       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.998442       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.998464       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:22.999335       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:22.999379       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:23.022103       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:23.022893       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 02:58:23.028286       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 02:58:23.028325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0214 02:58:23.999246       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0214 02:58:24.028052       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0214 02:58:24.046522       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [f80836cb769d35d70e4058ed2c08a868ec2ec839d44473cf672960a0c82a2102] <==
	I0214 02:58:05.319464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.089µs"
	I0214 02:58:05.365166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="96.424µs"
	I0214 02:58:06.715389       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0214 02:58:07.758141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.748506ms"
	I0214 02:58:07.758352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.812µs"
	I0214 02:58:08.769405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.107684ms"
	I0214 02:58:08.769468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.794µs"
	I0214 02:58:08.771055       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="default/hello-world-app" err="EndpointSlice informer cache is out of date"
	I0214 02:58:09.751006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.918µs"
	I0214 02:58:10.756055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.459µs"
	I0214 02:58:11.582545       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0214 02:58:22.439925       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:22.439969       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 02:58:22.572800       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0214 02:58:22.576210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="7.828µs"
	I0214 02:58:22.587069       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0214 02:58:23.074947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="8.435µs"
	E0214 02:58:24.001405       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:24.030834       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:24.048398       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 02:58:24.827192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.385µs"
	W0214 02:58:24.936763       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:24.936797       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0214 02:58:25.039207       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 02:58:25.039243       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [fa504d1b8fe727aba4f266a4b61f44c00699490b9f9ea9a99de52d358a959cbd] <==
	I0214 02:55:53.431034       1 server_others.go:69] "Using iptables proxy"
	I0214 02:55:53.464296       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0214 02:55:53.537963       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 02:55:53.540280       1 server_others.go:152] "Using iptables Proxier"
	I0214 02:55:53.540328       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 02:55:53.540337       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 02:55:53.540368       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 02:55:53.540581       1 server.go:846] "Version info" version="v1.28.4"
	I0214 02:55:53.540596       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 02:55:53.542865       1 config.go:188] "Starting service config controller"
	I0214 02:55:53.542887       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 02:55:53.542908       1 config.go:97] "Starting endpoint slice config controller"
	I0214 02:55:53.542911       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 02:55:53.543832       1 config.go:315] "Starting node config controller"
	I0214 02:55:53.543843       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 02:55:53.643879       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0214 02:55:53.643948       1 shared_informer.go:318] Caches are synced for node config
	I0214 02:55:53.643962       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b80dce1302efbac98cbe18a6f823462f0bba917d6788e1dbe962e8e5c877057f] <==
	W0214 02:55:36.509554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 02:55:36.509635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0214 02:55:36.509789       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 02:55:36.509874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0214 02:55:36.509985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 02:55:36.510058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0214 02:55:36.510161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 02:55:36.510229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0214 02:55:36.510337       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 02:55:36.510387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 02:55:36.510609       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 02:55:36.511532       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 02:55:37.361948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 02:55:37.362040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 02:55:37.410377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0214 02:55:37.410419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0214 02:55:37.442568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0214 02:55:37.442810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0214 02:55:37.496612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 02:55:37.496903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0214 02:55:37.588656       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 02:55:37.588868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0214 02:55:37.629756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 02:55:37.629995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0214 02:55:38.095399       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 02:58:21 addons-107916 kubelet[1338]: I0214 02:58:21.755613    1338 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5nrr\" (UniqueName: \"kubernetes.io/projected/6015e7be-aeae-4d2f-a1ee-3f92e61da1e5-kube-api-access-p5nrr\") pod \"6015e7be-aeae-4d2f-a1ee-3f92e61da1e5\" (UID: \"6015e7be-aeae-4d2f-a1ee-3f92e61da1e5\") "
	Feb 14 02:58:21 addons-107916 kubelet[1338]: I0214 02:58:21.759797    1338 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6015e7be-aeae-4d2f-a1ee-3f92e61da1e5-kube-api-access-p5nrr" (OuterVolumeSpecName: "kube-api-access-p5nrr") pod "6015e7be-aeae-4d2f-a1ee-3f92e61da1e5" (UID: "6015e7be-aeae-4d2f-a1ee-3f92e61da1e5"). InnerVolumeSpecName "kube-api-access-p5nrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 14 02:58:21 addons-107916 kubelet[1338]: I0214 02:58:21.779011    1338 scope.go:117] "RemoveContainer" containerID="9c77c164d07ea9c7f626bc93b813344276940e44c048bc6e5b0217f875e4c3aa"
	Feb 14 02:58:21 addons-107916 kubelet[1338]: I0214 02:58:21.856844    1338 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p5nrr\" (UniqueName: \"kubernetes.io/projected/6015e7be-aeae-4d2f-a1ee-3f92e61da1e5-kube-api-access-p5nrr\") on node \"addons-107916\" DevicePath \"\""
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.368559    1338 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsb7b\" (UniqueName: \"kubernetes.io/projected/a3c8748f-3bbf-450a-8ab8-f682dc3540b3-kube-api-access-rsb7b\") pod \"a3c8748f-3bbf-450a-8ab8-f682dc3540b3\" (UID: \"a3c8748f-3bbf-450a-8ab8-f682dc3540b3\") "
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.388964    1338 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c8748f-3bbf-450a-8ab8-f682dc3540b3-kube-api-access-rsb7b" (OuterVolumeSpecName: "kube-api-access-rsb7b") pod "a3c8748f-3bbf-450a-8ab8-f682dc3540b3" (UID: "a3c8748f-3bbf-450a-8ab8-f682dc3540b3"). InnerVolumeSpecName "kube-api-access-rsb7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.469096    1338 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b9d4\" (UniqueName: \"kubernetes.io/projected/05d7fa06-002b-46bb-bfca-2acdd4c8d6c1-kube-api-access-2b9d4\") pod \"05d7fa06-002b-46bb-bfca-2acdd4c8d6c1\" (UID: \"05d7fa06-002b-46bb-bfca-2acdd4c8d6c1\") "
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.469389    1338 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rsb7b\" (UniqueName: \"kubernetes.io/projected/a3c8748f-3bbf-450a-8ab8-f682dc3540b3-kube-api-access-rsb7b\") on node \"addons-107916\" DevicePath \"\""
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.471169    1338 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d7fa06-002b-46bb-bfca-2acdd4c8d6c1-kube-api-access-2b9d4" (OuterVolumeSpecName: "kube-api-access-2b9d4") pod "05d7fa06-002b-46bb-bfca-2acdd4c8d6c1" (UID: "05d7fa06-002b-46bb-bfca-2acdd4c8d6c1"). InnerVolumeSpecName "kube-api-access-2b9d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.569765    1338 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2b9d4\" (UniqueName: \"kubernetes.io/projected/05d7fa06-002b-46bb-bfca-2acdd4c8d6c1-kube-api-access-2b9d4\") on node \"addons-107916\" DevicePath \"\""
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.616205    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6015e7be-aeae-4d2f-a1ee-3f92e61da1e5" path="/var/lib/kubelet/pods/6015e7be-aeae-4d2f-a1ee-3f92e61da1e5/volumes"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.616692    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="60481df1-66d8-42ed-b156-09cc2f49055d" path="/var/lib/kubelet/pods/60481df1-66d8-42ed-b156-09cc2f49055d/volumes"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.617152    1338 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d75f039e-8f96-4e34-9d6c-2cad4e54eb36" path="/var/lib/kubelet/pods/d75f039e-8f96-4e34-9d6c-2cad4e54eb36/volumes"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.788813    1338 scope.go:117] "RemoveContainer" containerID="732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.799007    1338 scope.go:117] "RemoveContainer" containerID="732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: E0214 02:58:23.799460    1338 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\": not found" containerID="732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.799564    1338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466"} err="failed to get container status \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\": rpc error: code = NotFound desc = an error occurred when try to find container \"732c4542a0bbd36c2581f60aaced83ff9a4ac14618a1e9c223a4e6ba652f8466\": not found"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.799579    1338 scope.go:117] "RemoveContainer" containerID="3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.822074    1338 scope.go:117] "RemoveContainer" containerID="3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: E0214 02:58:23.835283    1338 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\": not found" containerID="3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"
	Feb 14 02:58:23 addons-107916 kubelet[1338]: I0214 02:58:23.835379    1338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189"} err="failed to get container status \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b6f094381065667ffc429dc21ebda94cc7ba68e3e0c1fd625e6a1c6e3e4a189\": not found"
	Feb 14 02:58:24 addons-107916 kubelet[1338]: I0214 02:58:24.612962    1338 scope.go:117] "RemoveContainer" containerID="faf8c51efff8f7b2b0ea8ee6d04dc9b4f667062dba51e412a107873db892cf63"
	Feb 14 02:58:24 addons-107916 kubelet[1338]: I0214 02:58:24.808645    1338 scope.go:117] "RemoveContainer" containerID="faf8c51efff8f7b2b0ea8ee6d04dc9b4f667062dba51e412a107873db892cf63"
	Feb 14 02:58:24 addons-107916 kubelet[1338]: I0214 02:58:24.809019    1338 scope.go:117] "RemoveContainer" containerID="3b2cdb1a2ee3c26c7f3297bbe0e4b65850cbf4fa5fba6512d177e9b99fbb3be3"
	Feb 14 02:58:24 addons-107916 kubelet[1338]: E0214 02:58:24.809320    1338 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-vrjll_default(a2adc08d-1c4a-4481-9f88-609698caed6a)\"" pod="default/hello-world-app-5d77478584-vrjll" podUID="a2adc08d-1c4a-4481-9f88-609698caed6a"
	
	
	==> storage-provisioner [7aa64a9ad4c3a86542d0495ced8fe5b123bc91517f08aeda2ce997fdcb9b6f54] <==
	I0214 02:55:59.233959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 02:55:59.256510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 02:55:59.256564       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 02:55:59.266307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 02:55:59.270146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-107916_763b5459-63ae-4459-9bdb-9e0466a6ab53!
	I0214 02:55:59.273139       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15f0e442-57af-411b-b639-9f6ff974b2a2", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-107916_763b5459-63ae-4459-9bdb-9e0466a6ab53 became leader
	I0214 02:55:59.371754       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-107916_763b5459-63ae-4459-9bdb-9e0466a6ab53!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-107916 -n addons-107916
helpers_test.go:261: (dbg) Run:  kubectl --context addons-107916 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (48.55s)
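For context on the CSI failure above: the kube-apiserver log shows the snapshot.storage.k8s.io v1/v1beta1 group versions being registered and then all of their watchers being terminated, i.e. the volumesnapshot CRDs were torn down (the addon being disabled during cleanup) while the test was still winding up. A minimal, hypothetical Go helper for reproducing this locally — not part of the test suite, and it assumes kubectl and the addons-107916 context are available — that checks whether those CRDs are still served:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// These are the three CRDs whose caches the apiserver log above reports tearing down.
	for _, crd := range []string{
		"volumesnapshots.snapshot.storage.k8s.io",
		"volumesnapshotcontents.snapshot.storage.k8s.io",
		"volumesnapshotclasses.snapshot.storage.k8s.io",
	} {
		// kubectl exits non-zero once a CRD has been deleted, which matches
		// the "Terminating all watchers" lines in the apiserver log.
		out, err := exec.Command("kubectl", "--context", "addons-107916",
			"get", "crd", crd, "-o", "name").CombinedOutput()
		fmt.Printf("%s: err=%v out=%s", crd, err, out)
	}
}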

                                                
                                    
TestFunctional/serial/ExtraConfig (23.76s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-991896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-991896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (16.674242719s)

                                                
                                                
-- stdout --
	* [functional-991896] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node functional-991896 in cluster functional-991896
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Updating the running docker "functional-991896" container ...
	* Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0214 03:02:07.515963 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	E0214 03:02:07.523273 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "etcd-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	E0214 03:02:07.531165 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-apiserver-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	E0214 03:02:07.670862 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-controller-manager-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	E0214 03:02:08.069990 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-proxy-kd7sf" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	E0214 03:02:08.466818 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-scheduler-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	E0214 03:02:08.481895 1160645 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0214 03:02:08.683135 1160645 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-991896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 16.674468444s for "functional-991896" cluster.
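Every WaitExtra/pod_ready error in the stderr above reduces to the same symptom: TCP connections to the apiserver endpoint 192.168.49.2:8441 were refused while the control plane restarted. A minimal, stdlib-only Go sketch (hypothetical, not part of the harness; the address is taken from the errors above) that reproduces the check independently:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the failed start was polling (see the dial errors above).
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 3*time.Second)
	if err != nil {
		// "connect: connection refused" here matches the harness errors.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}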
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-991896
helpers_test.go:235: (dbg) docker inspect functional-991896:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738",
	        "Created": "2024-02-14T03:00:44.23705449Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1156915,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:00:44.557320955Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738/hostname",
	        "HostsPath": "/var/lib/docker/containers/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738/hosts",
	        "LogPath": "/var/lib/docker/containers/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738-json.log",
	        "Name": "/functional-991896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-991896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-991896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cfe668a3012eefc81675181b0604d21a4f24c834b18b63f4f28673af93542e5e-init/diff:/var/lib/docker/overlay2/2b57dacbb0185892ad2774651ca7e304a0e7ce49c55385fdb5828fd98438b35e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfe668a3012eefc81675181b0604d21a4f24c834b18b63f4f28673af93542e5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfe668a3012eefc81675181b0604d21a4f24c834b18b63f4f28673af93542e5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfe668a3012eefc81675181b0604d21a4f24c834b18b63f4f28673af93542e5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-991896",
	                "Source": "/var/lib/docker/volumes/functional-991896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-991896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-991896",
	                "name.minikube.sigs.k8s.io": "functional-991896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6162f4aaee34d63ba1a12e89259b5dea9a35e91978e41267ff63a574214e87c3",
	            "SandboxKey": "/var/run/docker/netns/6162f4aaee34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34047"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34043"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34045"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34044"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-991896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d26660904130",
	                        "functional-991896"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "cd73f64c59256848a6ae734282eb73b16b89679a5c149f9a0d8d2967e49ff9f2",
	                    "EndpointID": "443e9dcfb8391accd495bd72524aa18cfe6117b96990a88737a19f584b83cc18",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-991896",
	                        "d26660904130"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
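The inspect output above is how minikube's host-side tooling learns to reach the node: each guest port (22, 2376, 5000, 8441, 32443) is published on 127.0.0.1 with an ephemeral host port under NetworkSettings.Ports. A hypothetical, stdlib-only Go sketch that extracts the published apiserver mapping from the same JSON (the struct models only the fields this sketch reads):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models just the slice of `docker inspect` output used below.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-991896").Output()
	if err != nil {
		panic(err)
	}
	var containers []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, binding := range containers[0].NetworkSettings.Ports["8441/tcp"] {
		// With the state captured above this prints 127.0.0.1:34044.
		fmt.Printf("apiserver published at %s:%s\n", binding.HostIp, binding.HostPort)
	}
}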
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-991896 -n functional-991896
E0214 03:02:09.897469 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:09.903822 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:09.914085 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:09.934331 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:09.974596 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:10.054867 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:10.215291 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:10.535773 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:11.176599 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:02:12.456828 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
helpers_test.go:239: (dbg) Done: out/minikube-linux-arm64 status --format={{.Host}} -p functional-991896 -n functional-991896: (4.907948017s)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 logs -n 25
E0214 03:02:15.017010 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 logs -n 25: (1.666115999s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-013257                                                         | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	| start   | -p functional-991896                                                     | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:01 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-991896                                                     | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-991896 cache add                                              | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-991896 cache add                                              | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-991896 cache add                                              | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-991896 cache add                                              | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | minikube-local-cache-test:functional-991896                              |                   |         |         |                     |                     |
	| cache   | functional-991896 cache delete                                           | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | minikube-local-cache-test:functional-991896                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	| ssh     | functional-991896 ssh sudo                                               | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-991896                                                        | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-991896 ssh                                                    | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-991896 cache reload                                           | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	| ssh     | functional-991896 ssh                                                    | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-991896 kubectl --                                             | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | --context functional-991896                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-991896                                                     | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 03:01:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 03:01:52.083370 1160645 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:01:52.083564 1160645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:01:52.083568 1160645 out.go:304] Setting ErrFile to fd 2...
	I0214 03:01:52.083573 1160645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:01:52.083918 1160645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:01:52.084348 1160645 out.go:298] Setting JSON to false
	I0214 03:01:52.085873 1160645 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20658,"bootTime":1707859054,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 03:01:52.085960 1160645 start.go:138] virtualization:  
	I0214 03:01:52.088831 1160645 out.go:177] * [functional-991896] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 03:01:52.091400 1160645 out.go:177]   - MINIKUBE_LOCATION=18166
	I0214 03:01:52.093261 1160645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:01:52.091596 1160645 notify.go:220] Checking for updates...
	I0214 03:01:52.097987 1160645 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:01:52.100119 1160645 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 03:01:52.102273 1160645 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:01:52.104356 1160645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:01:52.106953 1160645 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:01:52.107046 1160645 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:01:52.128444 1160645 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:01:52.128558 1160645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:01:52.207780 1160645 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:69 SystemTime:2024-02-14 03:01:52.198306693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:01:52.207872 1160645 docker.go:295] overlay module found
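
The `docker system info --format "{{json .}}"` probe above is how minikube snapshots the daemon state (storage driver, cgroup driver, CPU and memory) before validating the docker driver. A minimal Go sketch of the same probe follows; the `dockerInfo` struct and its field set are an assumption chosen to match the JSON keys visible in the log, not minikube's actual types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo decodes a handful of the fields that appear in the log above.
    type dockerInfo struct {
        Driver       string `json:"Driver"`
        CgroupDriver string `json:"CgroupDriver"`
        NCPU         int    `json:"NCPU"`
        MemTotal     int64  `json:"MemTotal"`
        OSType       string `json:"OSType"`
        Architecture string `json:"Architecture"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("%s/%s, %d CPUs, %d bytes RAM, storage=%s, cgroups=%s\n",
            info.OSType, info.Architecture, info.NCPU, info.MemTotal,
            info.Driver, info.CgroupDriver)
    }
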
	I0214 03:01:52.209947 1160645 out.go:177] * Using the docker driver based on existing profile
	I0214 03:01:52.211653 1160645 start.go:298] selected driver: docker
	I0214 03:01:52.211662 1160645 start.go:902] validating driver "docker" against &{Name:functional-991896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:01:52.211745 1160645 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:01:52.211846 1160645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:01:52.285703 1160645 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:69 SystemTime:2024-02-14 03:01:52.276640232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:01:52.286125 1160645 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 03:01:52.286167 1160645 cni.go:84] Creating CNI manager for ""
	I0214 03:01:52.286175 1160645 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
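
With the docker driver and the containerd runtime, pods cannot rely on docker's built-in bridge networking, so minikube falls back to kindnet, as the cni.go line above records. A toy sketch of that decision (the helper `chooseCNI` is illustrative, a paraphrase of the logged logic rather than the real function):

    package main

    import "fmt"

    // chooseCNI paraphrases the choice logged by cni.go above: with the
    // docker driver, only the docker runtime can lean on the built-in
    // bridge; containerd (as here) and cri-o need an explicit CNI.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "" // leave CNI selection to the runtime/config
    }

    func main() {
        fmt.Println(chooseCNI("docker", "containerd")) // kindnet
    }
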
	I0214 03:01:52.286186 1160645 start_flags.go:321] config:
	{Name:functional-991896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:01:52.289608 1160645 out.go:177] * Starting control plane node functional-991896 in cluster functional-991896
	I0214 03:01:52.291629 1160645 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0214 03:01:52.293788 1160645 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 03:01:52.295763 1160645 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 03:01:52.295849 1160645 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 03:01:52.295841 1160645 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0214 03:01:52.295866 1160645 cache.go:56] Caching tarball of preloaded images
	I0214 03:01:52.296049 1160645 preload.go:174] Found /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0214 03:01:52.296062 1160645 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0214 03:01:52.296191 1160645 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/config.json ...
	I0214 03:01:52.311744 1160645 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 03:01:52.311759 1160645 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 03:01:52.311782 1160645 cache.go:194] Successfully downloaded all kic artifacts
	I0214 03:01:52.311818 1160645 start.go:365] acquiring machines lock for functional-991896: {Name:mk593e53724b0278df4a8322a2172870edf53457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 03:01:52.311895 1160645 start.go:369] acquired machines lock for "functional-991896" in 58.213µs
	I0214 03:01:52.311916 1160645 start.go:96] Skipping create...Using existing machine configuration
	I0214 03:01:52.311921 1160645 fix.go:54] fixHost starting: 
	I0214 03:01:52.312201 1160645 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	I0214 03:01:52.328082 1160645 fix.go:102] recreateIfNeeded on functional-991896: state=Running err=<nil>
	W0214 03:01:52.328108 1160645 fix.go:128] unexpected machine state, will restart: <nil>
	I0214 03:01:52.330287 1160645 out.go:177] * Updating the running docker "functional-991896" container ...
	I0214 03:01:52.332266 1160645 machine.go:88] provisioning docker machine ...
	I0214 03:01:52.332286 1160645 ubuntu.go:169] provisioning hostname "functional-991896"
	I0214 03:01:52.332357 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:52.349504 1160645 main.go:141] libmachine: Using SSH client type: native
	I0214 03:01:52.349934 1160645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0214 03:01:52.349945 1160645 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-991896 && echo "functional-991896" | sudo tee /etc/hostname
	I0214 03:01:52.501267 1160645 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-991896
	
	I0214 03:01:52.501348 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:52.520680 1160645 main.go:141] libmachine: Using SSH client type: native
	I0214 03:01:52.521221 1160645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0214 03:01:52.521244 1160645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-991896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-991896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-991896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 03:01:52.656536 1160645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
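
Provisioning runs over SSH to the host port docker published for the container's 22/tcp (127.0.0.1:34047 here), authenticating with the per-machine id_rsa key from the log. A condensed sketch of that "run one command over SSH" step using golang.org/x/crypto/ssh, with the key path, port, and command taken from the log lines above (minikube's real runner adds retries and pty handling):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only: no host-key pinning
        }
        // 34047 is the host port mapped to the container's 22/tcp.
        client, err := ssh.Dial("tcp", "127.0.0.1:34047", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname functional-991896 && echo "functional-991896" | sudo tee /etc/hostname`)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }
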
	I0214 03:01:52.656552 1160645 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18166-1129740/.minikube CaCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18166-1129740/.minikube}
	I0214 03:01:52.656575 1160645 ubuntu.go:177] setting up certificates
	I0214 03:01:52.656583 1160645 provision.go:83] configureAuth start
	I0214 03:01:52.656650 1160645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-991896
	I0214 03:01:52.681850 1160645 provision.go:138] copyHostCerts
	I0214 03:01:52.681908 1160645 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem, removing ...
	I0214 03:01:52.681916 1160645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem
	I0214 03:01:52.681995 1160645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem (1082 bytes)
	I0214 03:01:52.682090 1160645 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem, removing ...
	I0214 03:01:52.682094 1160645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem
	I0214 03:01:52.682120 1160645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem (1123 bytes)
	I0214 03:01:52.682177 1160645 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem, removing ...
	I0214 03:01:52.682181 1160645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem
	I0214 03:01:52.682204 1160645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem (1675 bytes)
	I0214 03:01:52.682243 1160645 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem org=jenkins.functional-991896 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-991896]
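
The san=[...] list in the line above becomes the server certificate's subject alternative names, signed by the shared minikube CA. A compact crypto/x509 sketch of issuing such a cert; the CA here is a throwaway in-memory key (the real flow loads ca.pem/ca-key.pem from .minikube/certs), and the SAN list is copied from the log:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Toy CA key; minikube would load the existing CA from disk instead.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-991896"}},
            // The SAN list from the provision.go line above.
            DNSNames:    []string{"localhost", "minikube", "functional-991896"},
            IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(3, 0, 0),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
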
	I0214 03:01:53.280022 1160645 provision.go:172] copyRemoteCerts
	I0214 03:01:53.280085 1160645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 03:01:53.280125 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.296717 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.392513 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0214 03:01:53.419618 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 03:01:53.444751 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 03:01:53.472216 1160645 provision.go:86] duration metric: configureAuth took 815.619987ms
	I0214 03:01:53.472245 1160645 ubuntu.go:193] setting minikube options for container-runtime
	I0214 03:01:53.472467 1160645 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:01:53.472475 1160645 machine.go:91] provisioned docker machine in 1.14019944s
	I0214 03:01:53.472482 1160645 start.go:300] post-start starting for "functional-991896" (driver="docker")
	I0214 03:01:53.472492 1160645 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 03:01:53.472542 1160645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 03:01:53.472580 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.489921 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.585090 1160645 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 03:01:53.588469 1160645 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 03:01:53.588496 1160645 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 03:01:53.588505 1160645 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 03:01:53.588512 1160645 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 03:01:53.588521 1160645 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/addons for local assets ...
	I0214 03:01:53.588582 1160645 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/files for local assets ...
	I0214 03:01:53.588668 1160645 filesync.go:149] local asset: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem -> 11350872.pem in /etc/ssl/certs
	I0214 03:01:53.588756 1160645 filesync.go:149] local asset: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/test/nested/copy/1135087/hosts -> hosts in /etc/test/nested/copy/1135087
	I0214 03:01:53.588800 1160645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1135087
	I0214 03:01:53.597903 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem --> /etc/ssl/certs/11350872.pem (1708 bytes)
	I0214 03:01:53.622706 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/test/nested/copy/1135087/hosts --> /etc/test/nested/copy/1135087/hosts (40 bytes)
	I0214 03:01:53.647947 1160645 start.go:303] post-start completed in 175.450645ms
	I0214 03:01:53.648019 1160645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:01:53.648071 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.665594 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.756579 1160645 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 03:01:53.761457 1160645 fix.go:56] fixHost completed within 1.449528128s
	I0214 03:01:53.761472 1160645 start.go:83] releasing machines lock for "functional-991896", held for 1.449569579s
	I0214 03:01:53.761571 1160645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-991896
	I0214 03:01:53.778518 1160645 ssh_runner.go:195] Run: cat /version.json
	I0214 03:01:53.778561 1160645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 03:01:53.778565 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.778662 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.807750 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.808448 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.903099 1160645 ssh_runner.go:195] Run: systemctl --version
	I0214 03:01:54.044926 1160645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 03:01:54.049547 1160645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0214 03:01:54.068118 1160645 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0214 03:01:54.068214 1160645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 03:01:54.077894 1160645 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0214 03:01:54.077909 1160645 start.go:475] detecting cgroup driver to use...
	I0214 03:01:54.077939 1160645 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 03:01:54.077987 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0214 03:01:54.091133 1160645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0214 03:01:54.103582 1160645 docker.go:217] disabling cri-docker service (if available) ...
	I0214 03:01:54.103651 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 03:01:54.118267 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 03:01:54.130311 1160645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 03:01:54.243377 1160645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 03:01:54.363262 1160645 docker.go:233] disabling docker service ...
	I0214 03:01:54.363335 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 03:01:54.376532 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 03:01:54.388637 1160645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 03:01:54.497704 1160645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 03:01:54.614435 1160645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 03:01:54.629108 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 03:01:54.648623 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0214 03:01:54.659091 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0214 03:01:54.669120 1160645 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0214 03:01:54.669179 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0214 03:01:54.679356 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:01:54.689903 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0214 03:01:54.700106 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:01:54.710262 1160645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 03:01:54.719723 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0214 03:01:54.730033 1160645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 03:01:54.738693 1160645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 03:01:54.746902 1160645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:01:54.849330 1160645 ssh_runner.go:195] Run: sudo systemctl restart containerd
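
The series of sed one-liners above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.9, force `SystemdCgroup = false` (the host cgroup driver is cgroupfs), migrate legacy runtime names to `io.containerd.runc.v2`, point `conf_dir` at /etc/cni/net.d, then restart containerd. The same rewrite expressed as a small Go filter, with regexes mirroring the logged sed expressions (root access to the file is assumed; the `systemd_cgroup` line deletion is omitted):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        s := string(data)
        for _, r := range []struct{ re, repl string }{
            {`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
            {`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
            {`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
            {`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
            {`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
            {`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
        } {
            s = regexp.MustCompile(r.re).ReplaceAllString(s, r.repl)
        }
        if err := os.WriteFile(path, []byte(s), 0644); err != nil {
            panic(err)
        }
    }
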
	I0214 03:01:55.062851 1160645 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0214 03:01:55.062926 1160645 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0214 03:01:55.067246 1160645 start.go:543] Will wait 60s for crictl version
	I0214 03:01:55.067303 1160645 ssh_runner.go:195] Run: which crictl
	I0214 03:01:55.071302 1160645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 03:01:55.112631 1160645 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0214 03:01:55.112739 1160645 ssh_runner.go:195] Run: containerd --version
	I0214 03:01:55.148154 1160645 ssh_runner.go:195] Run: containerd --version
	I0214 03:01:55.181562 1160645 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0214 03:01:55.183674 1160645 cli_runner.go:164] Run: docker network inspect functional-991896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 03:01:55.199716 1160645 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 03:01:55.205640 1160645 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0214 03:01:55.207836 1160645 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 03:01:55.207919 1160645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 03:01:55.244557 1160645 containerd.go:612] all images are preloaded for containerd runtime.
	I0214 03:01:55.244569 1160645 containerd.go:519] Images already preloaded, skipping extraction
	I0214 03:01:55.244631 1160645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 03:01:55.292781 1160645 containerd.go:612] all images are preloaded for containerd runtime.
	I0214 03:01:55.292794 1160645 cache_images.go:84] Images are preloaded, skipping loading
	I0214 03:01:55.292863 1160645 ssh_runner.go:195] Run: sudo crictl info
	I0214 03:01:55.329890 1160645 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0214 03:01:55.329913 1160645 cni.go:84] Creating CNI manager for ""
	I0214 03:01:55.329921 1160645 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 03:01:55.329931 1160645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 03:01:55.329951 1160645 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-991896 NodeName:functional-991896 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 03:01:55.330073 1160645 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-991896"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 03:01:55.330141 1160645 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-991896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
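
The kubeadm YAML and the kubelet drop-in above are rendered from templates, with the node name, node IP, Kubernetes version, and CRI socket substituted per profile. A stripped-down text/template sketch of that rendering step, abbreviated to the ExecStart line; the `kubeletOpts` type is illustrative, not minikube's:

    package main

    import (
        "os"
        "text/template"
    )

    type kubeletOpts struct {
        KubernetesVersion, NodeName, NodeIP, RuntimeSocket string
    }

    const unit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet \
      --container-runtime-endpoint={{.RuntimeSocket}} \
      --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, kubeletOpts{
            KubernetesVersion: "v1.28.4",
            NodeName:          "functional-991896",
            NodeIP:            "192.168.49.2",
            RuntimeSocket:     "unix:///run/containerd/containerd.sock",
        })
    }
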
	I0214 03:01:55.330215 1160645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0214 03:01:55.339335 1160645 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 03:01:55.339400 1160645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 03:01:55.348243 1160645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0214 03:01:55.366814 1160645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 03:01:55.384635 1160645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I0214 03:01:55.404750 1160645 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 03:01:55.408397 1160645 certs.go:56] Setting up /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896 for IP: 192.168.49.2
	I0214 03:01:55.408419 1160645 certs.go:190] acquiring lock for shared ca certs: {Name:mk121f32762802a204d98d3cbcae9456442a0756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:01:55.408573 1160645 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key
	I0214 03:01:55.408633 1160645 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key
	I0214 03:01:55.408709 1160645 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.key
	I0214 03:01:55.408752 1160645 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/apiserver.key.dd3b5fb2
	I0214 03:01:55.408791 1160645 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/proxy-client.key
	I0214 03:01:55.408909 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087.pem (1338 bytes)
	W0214 03:01:55.408937 1160645 certs.go:433] ignoring /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087_empty.pem, impossibly tiny 0 bytes
	I0214 03:01:55.408946 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 03:01:55.408971 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem (1082 bytes)
	I0214 03:01:55.408992 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem (1123 bytes)
	I0214 03:01:55.409019 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem (1675 bytes)
	I0214 03:01:55.409064 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem (1708 bytes)
	I0214 03:01:55.409768 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 03:01:55.435883 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 03:01:55.466945 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 03:01:55.493522 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 03:01:55.518617 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 03:01:55.545999 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 03:01:55.572271 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 03:01:55.598777 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 03:01:55.624603 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem --> /usr/share/ca-certificates/11350872.pem (1708 bytes)
	I0214 03:01:55.649804 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 03:01:55.674633 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087.pem --> /usr/share/ca-certificates/1135087.pem (1338 bytes)
	I0214 03:01:55.700144 1160645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 03:01:55.719367 1160645 ssh_runner.go:195] Run: openssl version
	I0214 03:01:55.725630 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 03:01:55.735914 1160645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:01:55.739276 1160645 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:55 /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:01:55.739330 1160645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:01:55.746526 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 03:01:55.755807 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1135087.pem && ln -fs /usr/share/ca-certificates/1135087.pem /etc/ssl/certs/1135087.pem"
	I0214 03:01:55.765290 1160645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1135087.pem
	I0214 03:01:55.768738 1160645 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 03:00 /usr/share/ca-certificates/1135087.pem
	I0214 03:01:55.768794 1160645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1135087.pem
	I0214 03:01:55.775800 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1135087.pem /etc/ssl/certs/51391683.0"
	I0214 03:01:55.784745 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11350872.pem && ln -fs /usr/share/ca-certificates/11350872.pem /etc/ssl/certs/11350872.pem"
	I0214 03:01:55.794186 1160645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11350872.pem
	I0214 03:01:55.797751 1160645 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 03:00 /usr/share/ca-certificates/11350872.pem
	I0214 03:01:55.797808 1160645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11350872.pem
	I0214 03:01:55.805681 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11350872.pem /etc/ssl/certs/3ec20f2e.0"
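
Each CA the cluster should trust is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above), so OpenSSL's hashed-directory lookup can find it. A sketch of that install step, shelling out to `openssl x509 -hash` exactly as the log does; `installCA` is an illustrative helper name:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA links certPath into /etc/ssl/certs under its subject hash.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // replace any stale link, as `ln -fs` would
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
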
	I0214 03:01:55.815012 1160645 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 03:01:55.818461 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0214 03:01:55.825220 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0214 03:01:55.832494 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0214 03:01:55.839551 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0214 03:01:55.846516 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0214 03:01:55.854046 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
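
`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds; the runs above apply it to every control-plane cert before reusing them. An equivalent check in Go with crypto/x509 (`expiresWithin` is an illustrative helper, not minikube's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
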
	I0214 03:01:55.861374 1160645 kubeadm.go:404] StartCluster: {Name:functional-991896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:01:55.861455 1160645 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0214 03:01:55.861532 1160645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 03:01:55.899234 1160645 cri.go:89] found id: "e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786"
	I0214 03:01:55.899247 1160645 cri.go:89] found id: "307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832"
	I0214 03:01:55.899252 1160645 cri.go:89] found id: "0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f"
	I0214 03:01:55.899256 1160645 cri.go:89] found id: "4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364"
	I0214 03:01:55.899259 1160645 cri.go:89] found id: "c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6"
	I0214 03:01:55.899263 1160645 cri.go:89] found id: "a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6"
	I0214 03:01:55.899267 1160645 cri.go:89] found id: "28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c"
	I0214 03:01:55.899270 1160645 cri.go:89] found id: "e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a"
	I0214 03:01:55.899274 1160645 cri.go:89] found id: "b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3"
	I0214 03:01:55.899287 1160645 cri.go:89] found id: ""
	I0214 03:01:55.899337 1160645 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0214 03:01:55.932616 1160645 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa","pid":1654,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa/rootfs","created":"2024-02-14T03:01:20.359446991Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_3c8003a7-b2ec-4b9f-976e-b4eb23488340","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cr
i.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3c8003a7-b2ec-4b9f-976e-b4eb23488340"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f","pid":1871,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f/rootfs","created":"2024-02-14T03:01:21.108360966Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10","io.kubernetes.cri.sandbox-name":"kindnet-mh6zx","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d461098d-546c-422d-900a-eaa6fe79164a"},"own
er":"root"},{"ociVersion":"1.0.2-dev","id":"0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8","pid":2093,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8/rootfs","created":"2024-02-14T03:01:34.884131219Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-jvd5k_79cf7d44-3393-4acc-9a89-8c2696428c1f","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-jvd5k","io.kubernetes.cri.sandbox-namespace":"kube-
system","io.kubernetes.cri.sandbox-uid":"79cf7d44-3393-4acc-9a89-8c2696428c1f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c","pid":1305,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c/rootfs","created":"2024-02-14T03:00:58.739381517Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri.sandbox-id":"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1dbc4f3504298fd95e33ef4f99ee62f2"},"owner":"root"},{"oc
iVersion":"1.0.2-dev","id":"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04","pid":1176,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04/rootfs","created":"2024-02-14T03:00:58.545605517Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-991896_1dbc4f3504298fd95e33ef4f99ee62f2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.
kubernetes.cri.sandbox-uid":"1dbc4f3504298fd95e33ef4f99ee62f2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832","pid":2123,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832/rootfs","created":"2024-02-14T03:01:34.964141072Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-jvd5k","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"79cf7d44-3393-4acc-9a89-8c2696428c1f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id"
:"4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364","pid":1805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364/rootfs","created":"2024-02-14T03:01:20.934778227Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri.sandbox-id":"dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a","io.kubernetes.cri.sandbox-name":"kube-proxy-kd7sf","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"309a145a-a578-407d-93ac-e7b34f958c71"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781","pid":1150,"status":"running","bundle":"/run/contain
erd/io.containerd.runtime.v2.task/k8s.io/5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781/rootfs","created":"2024-02-14T03:00:58.500044827Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-991896_e8f785d6d77d9f3c8770b2490e72cd74","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e8f785d6d77d9f3c8770b2490e72cd74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a294c551d311a63
55104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10","pid":1738,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10/rootfs","created":"2024-02-14T03:01:20.808052485Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mh6zx_d461098d-546c-422d-900a-eaa6fe79164a","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mh6zx","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d461098d-546c-422d-900a-eaa6fe79164a"}
,"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6","pid":1337,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6/rootfs","created":"2024-02-14T03:00:58.810572449Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri.sandbox-id":"f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d73044a0de4a1a0c1234a6cffddf6a7b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894f
c985b9008dac3","pid":1238,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3/rootfs","created":"2024-02-14T03:00:58.631247363Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722","io.kubernetes.cri.sandbox-name":"etcd-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"815f2ec0a361159dadd056561a46fc5c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722","pid":1114,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d929a556f2a64645f85c3e
048773ec01fd8f6af8143dfb8818b99b9e4d3e1722","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722/rootfs","created":"2024-02-14T03:00:58.483431403Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-991896_815f2ec0a361159dadd056561a46fc5c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"815f2ec0a361159dadd056561a46fc5c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a","pid":1776,"status":"running","bundle":"/run/contai
nerd/io.containerd.runtime.v2.task/k8s.io/dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a/rootfs","created":"2024-02-14T03:01:20.815759878Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-kd7sf_309a145a-a578-407d-93ac-e7b34f958c71","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-kd7sf","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"309a145a-a578-407d-93ac-e7b34f958c71"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a64
9fb2a","pid":1271,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a/rootfs","created":"2024-02-14T03:00:58.690401808Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri.sandbox-id":"5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e8f785d6d77d9f3c8770b2490e72cd74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786","pid":2907,"status":"running","bundle":"/run/containerd/io.contain
erd.runtime.v2.task/k8s.io/e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786/rootfs","created":"2024-02-14T03:01:51.036272833Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3c8003a7-b2ec-4b9f-976e-b4eb23488340"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154","pid":1194,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154","rootfs
":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154/rootfs","created":"2024-02-14T03:00:58.569198989Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-991896_d73044a0de4a1a0c1234a6cffddf6a7b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d73044a0de4a1a0c1234a6cffddf6a7b"},"owner":"root"}]
	I0214 03:01:55.932901 1160645 cri.go:126] list returned 16 containers
	I0214 03:01:55.932909 1160645 cri.go:129] container: {ID:06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa Status:running}
	I0214 03:01:55.932924 1160645 cri.go:131] skipping 06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa - not in ps
	I0214 03:01:55.932929 1160645 cri.go:129] container: {ID:0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f Status:running}
	I0214 03:01:55.932934 1160645 cri.go:135] skipping {0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f running}: state = "running", want "paused"
	I0214 03:01:55.932942 1160645 cri.go:129] container: {ID:0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8 Status:running}
	I0214 03:01:55.932947 1160645 cri.go:131] skipping 0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8 - not in ps
	I0214 03:01:55.932952 1160645 cri.go:129] container: {ID:28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c Status:running}
	I0214 03:01:55.932957 1160645 cri.go:135] skipping {28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c running}: state = "running", want "paused"
	I0214 03:01:55.932963 1160645 cri.go:129] container: {ID:301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04 Status:running}
	I0214 03:01:55.932968 1160645 cri.go:131] skipping 301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04 - not in ps
	I0214 03:01:55.932972 1160645 cri.go:129] container: {ID:307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 Status:running}
	I0214 03:01:55.932978 1160645 cri.go:135] skipping {307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 running}: state = "running", want "paused"
	I0214 03:01:55.932983 1160645 cri.go:129] container: {ID:4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 Status:running}
	I0214 03:01:55.932989 1160645 cri.go:135] skipping {4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 running}: state = "running", want "paused"
	I0214 03:01:55.932993 1160645 cri.go:129] container: {ID:5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781 Status:running}
	I0214 03:01:55.933002 1160645 cri.go:131] skipping 5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781 - not in ps
	I0214 03:01:55.933006 1160645 cri.go:129] container: {ID:6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10 Status:running}
	I0214 03:01:55.933014 1160645 cri.go:131] skipping 6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10 - not in ps
	I0214 03:01:55.933019 1160645 cri.go:129] container: {ID:a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 Status:running}
	I0214 03:01:55.933024 1160645 cri.go:135] skipping {a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 running}: state = "running", want "paused"
	I0214 03:01:55.933029 1160645 cri.go:129] container: {ID:b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3 Status:running}
	I0214 03:01:55.933035 1160645 cri.go:135] skipping {b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3 running}: state = "running", want "paused"
	I0214 03:01:55.933040 1160645 cri.go:129] container: {ID:d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722 Status:running}
	I0214 03:01:55.933045 1160645 cri.go:131] skipping d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722 - not in ps
	I0214 03:01:55.933049 1160645 cri.go:129] container: {ID:dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a Status:running}
	I0214 03:01:55.933054 1160645 cri.go:131] skipping dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a - not in ps
	I0214 03:01:55.933058 1160645 cri.go:129] container: {ID:e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a Status:running}
	I0214 03:01:55.933064 1160645 cri.go:135] skipping {e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a running}: state = "running", want "paused"
	I0214 03:01:55.933069 1160645 cri.go:129] container: {ID:e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 Status:running}
	I0214 03:01:55.933074 1160645 cri.go:135] skipping {e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 running}: state = "running", want "paused"
	I0214 03:01:55.933079 1160645 cri.go:129] container: {ID:f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154 Status:running}
	I0214 03:01:55.933085 1160645 cri.go:131] skipping f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154 - not in ps
	I0214 03:01:55.933150 1160645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 03:01:55.942549 1160645 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0214 03:01:55.942560 1160645 kubeadm.go:636] restartCluster start
	I0214 03:01:55.942618 1160645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0214 03:01:55.951158 1160645 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0214 03:01:55.951713 1160645 kubeconfig.go:92] found "functional-991896" server: "https://192.168.49.2:8441"
	I0214 03:01:55.953081 1160645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0214 03:01:55.962071 1160645 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-02-14 03:00:49.859599608 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-02-14 03:01:55.397711128 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0214 03:01:55.962081 1160645 kubeadm.go:1135] stopping kube-system containers ...
	I0214 03:01:55.962094 1160645 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0214 03:01:55.962154 1160645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 03:01:56.006593 1160645 cri.go:89] found id: "e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786"
	I0214 03:01:56.006611 1160645 cri.go:89] found id: "307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832"
	I0214 03:01:56.006616 1160645 cri.go:89] found id: "0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f"
	I0214 03:01:56.006619 1160645 cri.go:89] found id: "4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364"
	I0214 03:01:56.006623 1160645 cri.go:89] found id: "c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6"
	I0214 03:01:56.006626 1160645 cri.go:89] found id: "a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6"
	I0214 03:01:56.006629 1160645 cri.go:89] found id: "28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c"
	I0214 03:01:56.006633 1160645 cri.go:89] found id: "e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a"
	I0214 03:01:56.006636 1160645 cri.go:89] found id: "b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3"
	I0214 03:01:56.006643 1160645 cri.go:89] found id: ""
	I0214 03:01:56.006647 1160645 cri.go:234] Stopping containers: [e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f 4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3]
	I0214 03:01:56.006727 1160645 ssh_runner.go:195] Run: which crictl
	I0214 03:01:56.011579 1160645 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f 4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3
	I0214 03:02:01.252643 1160645 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f 4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3: (5.241021437s)
	W0214 03:02:01.252702 1160645 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f 4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3: Process exited with status 1
	stdout:
	e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786
	307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832
	0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f
	4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364
	
	stderr:
	E0214 03:02:01.249589    3378 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6\": not found" containerID="c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6"
	time="2024-02-14T03:02:01Z" level=fatal msg="stopping the container \"c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6\": not found"
	I0214 03:02:01.252763 1160645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0214 03:02:01.312005 1160645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 03:02:01.321346 1160645 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 14 03:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 14 03:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 14 03:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 03:00 /etc/kubernetes/scheduler.conf
	
	I0214 03:02:01.321402 1160645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0214 03:02:01.330748 1160645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0214 03:02:01.339868 1160645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0214 03:02:01.351063 1160645 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 03:02:01.351120 1160645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 03:02:01.360311 1160645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0214 03:02:01.369318 1160645 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 03:02:01.369373 1160645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 03:02:01.377918 1160645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 03:02:01.387250 1160645 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0214 03:02:01.387264 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:01.447255 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:05.951376 1160645 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.504092911s)
	I0214 03:02:05.951399 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:06.148025 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:06.241669 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:06.374850 1160645 api_server.go:52] waiting for apiserver process to appear ...
	I0214 03:02:06.374926 1160645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 03:02:06.390740 1160645 api_server.go:72] duration metric: took 15.895868ms to wait for apiserver process to appear ...
	I0214 03:02:06.390755 1160645 api_server.go:88] waiting for apiserver healthz status ...
	I0214 03:02:06.390772 1160645 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0214 03:02:06.405145 1160645 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0214 03:02:06.432549 1160645 api_server.go:141] control plane version: v1.28.4
	I0214 03:02:06.432569 1160645 api_server.go:131] duration metric: took 41.808199ms to wait for apiserver health ...
	I0214 03:02:06.432577 1160645 cni.go:84] Creating CNI manager for ""
	I0214 03:02:06.432583 1160645 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 03:02:06.434871 1160645 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 03:02:06.437188 1160645 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 03:02:06.444003 1160645 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0214 03:02:06.444018 1160645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 03:02:06.495459 1160645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 03:02:06.865891 1160645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 03:02:06.874359 1160645 system_pods.go:59] 8 kube-system pods found
	I0214 03:02:06.874376 1160645 system_pods.go:61] "coredns-5dd5756b68-jvd5k" [79cf7d44-3393-4acc-9a89-8c2696428c1f] Running
	I0214 03:02:06.874381 1160645 system_pods.go:61] "etcd-functional-991896" [9bfbc3db-6fd3-4e20-94e4-d07ff42c82f1] Running
	I0214 03:02:06.874385 1160645 system_pods.go:61] "kindnet-mh6zx" [d461098d-546c-422d-900a-eaa6fe79164a] Running
	I0214 03:02:06.874390 1160645 system_pods.go:61] "kube-apiserver-functional-991896" [ae520740-8862-4bff-9b06-2457c835adfc] Running
	I0214 03:02:06.874394 1160645 system_pods.go:61] "kube-controller-manager-functional-991896" [0ea10820-57e4-4fcc-aad1-2fc01345a4af] Running
	I0214 03:02:06.874401 1160645 system_pods.go:61] "kube-proxy-kd7sf" [309a145a-a578-407d-93ac-e7b34f958c71] Running
	I0214 03:02:06.874405 1160645 system_pods.go:61] "kube-scheduler-functional-991896" [51a222c9-61be-4c3c-80c1-8abee69a962e] Running
	I0214 03:02:06.874411 1160645 system_pods.go:61] "storage-provisioner" [3c8003a7-b2ec-4b9f-976e-b4eb23488340] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 03:02:06.874418 1160645 system_pods.go:74] duration metric: took 8.516635ms to wait for pod list to return data ...
	I0214 03:02:06.874426 1160645 node_conditions.go:102] verifying NodePressure condition ...
	I0214 03:02:06.877783 1160645 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 03:02:06.877802 1160645 node_conditions.go:123] node cpu capacity is 2
	I0214 03:02:06.877812 1160645 node_conditions.go:105] duration metric: took 3.381913ms to run NodePressure ...
	I0214 03:02:06.877828 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:07.094421 1160645 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0214 03:02:07.099525 1160645 retry.go:31] will retry after 148.268108ms: kubelet not initialised
	I0214 03:02:07.253836 1160645 retry.go:31] will retry after 229.785569ms: kubelet not initialised
	I0214 03:02:07.489801 1160645 kubeadm.go:787] kubelet initialised
	I0214 03:02:07.489811 1160645 kubeadm.go:788] duration metric: took 395.376995ms waiting for restarted kubelet to initialise ...
	I0214 03:02:07.489819 1160645 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 03:02:07.506775 1160645 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:07.515939 1160645 pod_ready.go:97] node "functional-991896" hosting pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.515954 1160645 pod_ready.go:81] duration metric: took 9.16208ms waiting for pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:07.515963 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.515989 1160645 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-991896" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:07.523249 1160645 pod_ready.go:97] node "functional-991896" hosting pod "etcd-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.523264 1160645 pod_ready.go:81] duration metric: took 7.261953ms waiting for pod "etcd-functional-991896" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:07.523273 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "etcd-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.523299 1160645 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-991896" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:07.531141 1160645 pod_ready.go:97] node "functional-991896" hosting pod "kube-apiserver-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.531156 1160645 pod_ready.go:81] duration metric: took 7.849554ms waiting for pod "kube-apiserver-functional-991896" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:07.531165 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-apiserver-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.531186 1160645 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-991896" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:07.670837 1160645 pod_ready.go:97] node "functional-991896" hosting pod "kube-controller-manager-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.670852 1160645 pod_ready.go:81] duration metric: took 139.657565ms waiting for pod "kube-controller-manager-functional-991896" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:07.670862 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-controller-manager-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.670884 1160645 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kd7sf" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:08.069960 1160645 pod_ready.go:97] node "functional-991896" hosting pod "kube-proxy-kd7sf" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:08.069980 1160645 pod_ready.go:81] duration metric: took 399.085537ms waiting for pod "kube-proxy-kd7sf" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:08.069990 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-proxy-kd7sf" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:08.070010 1160645 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-991896" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:08.466790 1160645 pod_ready.go:97] node "functional-991896" hosting pod "kube-scheduler-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	I0214 03:02:08.466808 1160645 pod_ready.go:81] duration metric: took 396.786907ms waiting for pod "kube-scheduler-functional-991896" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:08.466818 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-scheduler-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	I0214 03:02:08.466844 1160645 pod_ready.go:38] duration metric: took 977.015813ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 03:02:08.466860 1160645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0214 03:02:08.477618 1160645 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I0214 03:02:08.477631 1160645 kubeadm.go:640] restartCluster took 12.535065524s
	I0214 03:02:08.477639 1160645 kubeadm.go:406] StartCluster complete in 12.616272602s
	I0214 03:02:08.477662 1160645 settings.go:142] acquiring lock: {Name:mkcc971fda27c724b3c1908f1b3da87aea10d784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:02:08.477716 1160645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:02:08.478450 1160645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/kubeconfig: {Name:mkc9d4ef83ac02b186254a828f8611428408dff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:02:08.478741 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 03:02:08.479002 1160645 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:02:08.479036 1160645 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 03:02:08.479092 1160645 addons.go:69] Setting storage-provisioner=true in profile "functional-991896"
	I0214 03:02:08.479105 1160645 addons.go:234] Setting addon storage-provisioner=true in "functional-991896"
	W0214 03:02:08.479110 1160645 addons.go:243] addon storage-provisioner should already be in state true
	I0214 03:02:08.479152 1160645 host.go:66] Checking if "functional-991896" exists ...
	I0214 03:02:08.479851 1160645 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	I0214 03:02:08.480356 1160645 addons.go:69] Setting default-storageclass=true in profile "functional-991896"
	I0214 03:02:08.480370 1160645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-991896"
	I0214 03:02:08.480671 1160645 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	W0214 03:02:08.481874 1160645 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-991896" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0214 03:02:08.481895 1160645 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0214 03:02:08.481962 1160645 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0214 03:02:08.486626 1160645 out.go:177] * Verifying Kubernetes components...
	I0214 03:02:08.489141 1160645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:02:08.516178 1160645 addons.go:234] Setting addon default-storageclass=true in "functional-991896"
	W0214 03:02:08.516189 1160645 addons.go:243] addon default-storageclass should already be in state true
	I0214 03:02:08.516210 1160645 host.go:66] Checking if "functional-991896" exists ...
	I0214 03:02:08.516661 1160645 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	I0214 03:02:08.578309 1160645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:02:08.580245 1160645 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 03:02:08.580257 1160645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 03:02:08.580328 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:02:08.610114 1160645 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 03:02:08.610126 1160645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 03:02:08.610204 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:02:08.627131 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:02:08.663219 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	E0214 03:02:08.683135 1160645 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0214 03:02:08.683154 1160645 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0214 03:02:08.683169 1160645 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0214 03:02:08.683234 1160645 node_ready.go:35] waiting up to 6m0s for node "functional-991896" to be "Ready" ...
	I0214 03:02:08.683560 1160645 node_ready.go:53] error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	I0214 03:02:08.683569 1160645 node_ready.go:38] duration metric: took 321.911µs waiting for node "functional-991896" to be "Ready" ...
	I0214 03:02:08.686911 1160645 out.go:177] 
	W0214 03:02:08.688680 1160645 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	W0214 03:02:08.688712 1160645 out.go:239] * 
	W0214 03:02:08.689729 1160645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 03:02:08.692239 1160645 out.go:177] 
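	
	==> note: apiserver healthz polling (illustrative) <==
	The restart above turns on two waits: api_server.go polls https://192.168.49.2:8441/healthz until it returns "200: ok" (03:02:06), and two seconds later node_ready.go fails with "connection refused" because the apiserver container is replaced again (see the container status table below: kube-apiserver attempt 0 Exited, attempt 1 Running). A minimal Go sketch of that healthz polling pattern, assuming a self-signed serving certificate; the URL, intervals, and function names here are illustrative, not minikube's actual code:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
	// Connection-refused errors are expected while the apiserver restarts,
	// so they are retried rather than treated as fatal.
	func waitHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption: probe-only client, so certificate verification
			// is skipped for the apiserver's self-signed cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, deadline)
	}
	
	func main() {
		if err := waitHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}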
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	51a6097d3e58d       04b4c447bb9d4       5 seconds ago        Running             kube-apiserver            1                   74a1b36f1f009       kube-apiserver-functional-991896
	d897622bddb54       ba04bb24b9575       6 seconds ago        Running             storage-provisioner       2                   06a40cca1b2a5       storage-provisioner
	d2d46b9852787       04b4eaa3d3db8       6 seconds ago        Running             kindnet-cni               1                   6a294c551d311       kindnet-mh6zx
	252459b15a307       97e04611ad434       6 seconds ago        Running             coredns                   1                   0bf28a38358cb       coredns-5dd5756b68-jvd5k
	6a61b69672281       3ca3ca488cf13       6 seconds ago        Running             kube-proxy                1                   dea25cb9f808e       kube-proxy-kd7sf
	2e0ef0e2fb337       04b4c447bb9d4       6 seconds ago        Exited              kube-apiserver            0                   74a1b36f1f009       kube-apiserver-functional-991896
	e6001396bdabd       ba04bb24b9575       23 seconds ago       Exited              storage-provisioner       1                   06a40cca1b2a5       storage-provisioner
	307767b829b18       97e04611ad434       39 seconds ago       Exited              coredns                   0                   0bf28a38358cb       coredns-5dd5756b68-jvd5k
	0820611a83e7b       04b4eaa3d3db8       53 seconds ago       Exited              kindnet-cni               0                   6a294c551d311       kindnet-mh6zx
	4f3111ac490b8       3ca3ca488cf13       53 seconds ago       Exited              kube-proxy                0                   dea25cb9f808e       kube-proxy-kd7sf
	a565e51d088ac       05c284c929889       About a minute ago   Running             kube-scheduler            0                   f2d510e64e146       kube-scheduler-functional-991896
	e08be804407a0       9961cbceaf234       About a minute ago   Running             kube-controller-manager   0                   5f2562da05d0f       kube-controller-manager-functional-991896
	b384015744a84       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   d929a556f2a64       etcd-functional-991896
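	
	==> note: reading the container status table (illustrative) <==
	The table reflects the restart sequence in the log above: attempt 0 of kube-apiserver, coredns, kindnet-cni and kube-proxy is Exited while attempt 1 is Running in the same pod sandboxes, and storage-provisioner is on attempt 2. The log gathered container IDs the same way, running crictl over ssh at 03:01:56. A short Go sketch of that shell-out, assuming crictl is on PATH and sudo is available; the helper structure is illustrative:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same flags as in the log: all containers (-a), IDs only (--quiet),
		// filtered to pods in the kube-system namespace.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}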
	
	
	==> containerd <==
	Feb 14 03:02:07 functional-991896 containerd[3185]: time="2024-02-14T03:02:07.871320119Z" level=info msg="cleaning up dead shim"
	Feb 14 03:02:07 functional-991896 containerd[3185]: time="2024-02-14T03:02:07.885135967Z" level=info msg="StartContainer for \"d2d46b9852787a2d2b5ad2969283d32e39e8195de40ac64c74f9cc6ba11c6f44\" returns successfully"
	Feb 14 03:02:07 functional-991896 containerd[3185]: time="2024-02-14T03:02:07.891338735Z" level=warning msg="cleanup warnings time=\"2024-02-14T03:02:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3835 runtime=io.containerd.runc.v2\n"
	Feb 14 03:02:07 functional-991896 containerd[3185]: time="2024-02-14T03:02:07.937727728Z" level=info msg="StartContainer for \"d897622bddb5428b6275a2991703a026f757508beb3ee8ec6e5e3d1d7d187e61\" returns successfully"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.348117288Z" level=info msg="StopContainer for \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" with timeout 2 (s)"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.352739959Z" level=info msg="Stop container \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" with signal terminated"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.408245963Z" level=info msg="shim disconnected" id=301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.408500183Z" level=warning msg="cleaning up after shim disconnected" id=301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04 namespace=k8s.io
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.408605961Z" level=info msg="cleaning up dead shim"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.420064120Z" level=warning msg="cleanup warnings time=\"2024-02-14T03:02:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3975 runtime=io.containerd.runc.v2\n"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.554291300Z" level=info msg="CreateContainer within sandbox \"74a1b36f1f009b558ea3d00b90c134bfbeeaed1ec6962dad1195ab6dff9f397a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.651965968Z" level=info msg="CreateContainer within sandbox \"74a1b36f1f009b558ea3d00b90c134bfbeeaed1ec6962dad1195ab6dff9f397a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9\""
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.654206442Z" level=info msg="StartContainer for \"51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9\""
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.658103671Z" level=info msg="shim disconnected" id=28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.658332087Z" level=warning msg="cleaning up after shim disconnected" id=28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c namespace=k8s.io
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.658368451Z" level=info msg="cleaning up dead shim"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.685955152Z" level=warning msg="cleanup warnings time=\"2024-02-14T03:02:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4054 runtime=io.containerd.runc.v2\n"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.690005213Z" level=info msg="StopContainer for \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" returns successfully"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.691439739Z" level=info msg="StopPodSandbox for \"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04\""
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.693333294Z" level=info msg="Container to stop \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.702191580Z" level=info msg="TearDown network for sandbox \"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04\" successfully"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.702342739Z" level=info msg="StopPodSandbox for \"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04\" returns successfully"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.779414398Z" level=info msg="StartContainer for \"51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9\" returns successfully"
	Feb 14 03:02:09 functional-991896 containerd[3185]: time="2024-02-14T03:02:09.545337320Z" level=info msg="RemoveContainer for \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\""
	Feb 14 03:02:09 functional-991896 containerd[3185]: time="2024-02-14T03:02:09.562810921Z" level=info msg="RemoveContainer for \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" returns successfully"
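	
	==> note: tolerating NotFound when stopping containers (illustrative) <==
	The earlier "crictl stop --timeout=10 ..." exited non-zero because container c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 had already disappeared, and kubeadm.go:689 downgraded that to a warning: a container that no longer exists needs no stopping. A Go sketch of that tolerance, assuming the NotFound case is detected by matching the runtime's error text (a real client would inspect gRPC status codes instead):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// stopContainer stops id via crictl, treating "not found" as success,
	// since a container that no longer exists is already stopped.
	func stopContainer(id string) error {
		out, err := exec.Command("sudo", "crictl", "stop", "--timeout=10", id).CombinedOutput()
		if err != nil {
			if strings.Contains(strings.ToLower(string(out)), "not found") {
				return nil // already gone, nothing to stop
			}
			return fmt.Errorf("stop %s: %v: %s", id, err, out)
		}
		return nil
	}
	
	func main() {
		fmt.Println(stopContainer("c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6"))
	}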
	
	
	==> coredns [252459b15a307abc1e89512a3fa3dfdd24455b22928ea22fc0cb3c5a5adace30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] 127.0.0.1:50264 - 44066 "HINFO IN 2676038425055356850.1973071159548491042. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048759318s
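	
	==> note: probing a specific DNS server (illustrative) <==
	The restarted CoreDNS races the apiserver restart (the "very short watch" messages above), then answers its own HINFO readiness probe on 127.0.0.1:53. To verify that one particular CoreDNS endpoint is answering, a resolver can be pinned to a single server. A Go sketch using only the standard library; the server address and query name are illustrative:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Ignore the system resolver and dial the target DNS server directly.
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53") // assumed cluster DNS address
			},
		}
		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}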
	
	
	==> coredns [307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49378 - 9032 "HINFO IN 9192085943000834440.4140252530550617898. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023404045s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
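	
	==> note: SIGTERM and lameduck shutdown (illustrative) <==
	The first CoreDNS instance exits cleanly: on SIGTERM it shuts down its servers, and the health plugin enters a 5s lameduck window so peers stop routing to it before the listener closes. A Go sketch that approximates the pattern with an HTTP server; the port, durations, and structure are illustrative, not CoreDNS's implementation:
	
	package main
	
	import (
		"context"
		"log"
		"net/http"
		"os/signal"
		"syscall"
		"time"
	)
	
	func main() {
		srv := &http.Server{Addr: ":8080"}
		ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
		defer stop()
	
		go func() {
			if err := srv.ListenAndServe(); err != http.ErrServerClosed {
				log.Fatal(err)
			}
		}()
	
		<-ctx.Done() // SIGTERM received
		// Lameduck: keep serving briefly so health checks can mark this
		// instance unhealthy before the listener actually closes.
		time.Sleep(5 * time.Second)
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := srv.Shutdown(shutdownCtx); err != nil {
			log.Print(err)
		}
	}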
	
	
	==> describe nodes <==
	Name:               functional-991896
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-991896
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40f210e92693e4612e04be0697de06db21ac5cf0
	                    minikube.k8s.io/name=functional-991896
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T03_01_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 03:01:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-991896
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 03:02:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 03:02:06 +0000   Wed, 14 Feb 2024 03:00:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 03:02:06 +0000   Wed, 14 Feb 2024 03:00:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 03:02:06 +0000   Wed, 14 Feb 2024 03:00:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 14 Feb 2024 03:02:06 +0000   Wed, 14 Feb 2024 03:02:06 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-991896
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c459f69322ab46b9a77a19277aba5e03
	  System UUID:                b35d1f76-a222-47f8-8c90-bbc2bdc29ed3
	  Boot ID:                    b6f8a130-5377-4a84-9795-3edbfc6d2fc5
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jvd5k                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-functional-991896                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         68s
	  kube-system                 kindnet-mh6zx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-functional-991896             250m (12%)    0 (0%)      0 (0%)           0 (0%)         1s
	  kube-system                 kube-controller-manager-functional-991896    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-kd7sf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-functional-991896             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node functional-991896 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node functional-991896 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node functional-991896 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s                kubelet          Node functional-991896 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s                kubelet          Node functional-991896 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s                kubelet          Node functional-991896 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             69s                kubelet          Node functional-991896 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                58s                kubelet          Node functional-991896 status is now: NodeReady
	  Normal  RegisteredNode           56s                node-controller  Node functional-991896 event: Registered Node functional-991896 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node functional-991896 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node functional-991896 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node functional-991896 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             8s                 kubelet          Node functional-991896 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
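	
	==> note: checking the node Ready condition (illustrative) <==
	The failure above is visible in this dump: the node carries the node.kubernetes.io/not-ready taint and its Ready condition is False with "container runtime status check may not have completed yet", which is what node_ready.go was waiting on when the apiserver connection was refused. A Go sketch of reading that condition with client-go; the kubeconfig path is an assumed placeholder:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-991896", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err) // e.g. connection refused while the apiserver restarts
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
			}
		}
	}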
	
	
	==> dmesg <==
	[  +0.001133] FS-Cache: O-key=[8] '2bd5c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001075] FS-Cache: N-key=[8] '2bd5c90000000000'
	[  +0.002828] FS-Cache: Duplicate cookie detected
	[  +0.000708] FS-Cache: O-cookie c=0000003b [p=00000039 fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=0000000076fc1031
	[  +0.001081] FS-Cache: O-key=[8] '2bd5c90000000000'
	[  +0.000709] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000005e2f857b
	[  +0.001050] FS-Cache: N-key=[8] '2bd5c90000000000'
	[  +2.757072] FS-Cache: Duplicate cookie detected
	[  +0.000789] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000994] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=0000000073828904
	[  +0.001121] FS-Cache: O-key=[8] '2ad5c90000000000'
	[  +0.000813] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001101] FS-Cache: N-key=[8] '2ad5c90000000000'
	[  +0.290556] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000975] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=00000000eab8090b
	[  +0.001047] FS-Cache: O-key=[8] '30d5c90000000000'
	[  +0.000761] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=00000000bc792bf3
	[  +0.001026] FS-Cache: N-key=[8] '30d5c90000000000'
	
	
	==> etcd [b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3] <==
	{"level":"info","ts":"2024-02-14T03:00:58.756506Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:00:58.756549Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:00:58.75656Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:00:58.756846Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T03:00:58.756864Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T03:00:58.75739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-14T03:00:58.757465Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-14T03:00:59.435597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-14T03:00:59.435649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-14T03:00:59.435665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-02-14T03:00:59.435689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:59.435732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:59.435764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:59.435795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:59.439684Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-991896 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T03:00:59.439841Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:00:59.440928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-14T03:00:59.44759Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:59.447871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:00:59.45582Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:59.458708Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:59.458874Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:59.462956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T03:00:59.465966Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T03:00:59.468002Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:02:14 up  5:44,  0 users,  load average: 2.00, 1.87, 1.88
	Linux functional-991896 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f] <==
	I0214 03:01:21.211906       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 03:01:21.212165       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0214 03:01:21.212388       1 main.go:116] setting mtu 1500 for CNI 
	I0214 03:01:21.212479       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 03:01:21.212585       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 03:01:21.506870       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:01:21.507181       1 main.go:227] handling current node
	I0214 03:01:31.523577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:01:31.523605       1 main.go:227] handling current node
	I0214 03:01:41.536605       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:01:41.536647       1 main.go:227] handling current node
	I0214 03:01:51.541484       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:01:51.541525       1 main.go:227] handling current node
	
	
	==> kindnet [d2d46b9852787a2d2b5ad2969283d32e39e8195de40ac64c74f9cc6ba11c6f44] <==
	I0214 03:02:07.914981       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 03:02:07.915236       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0214 03:02:07.915637       1 main.go:116] setting mtu 1500 for CNI 
	I0214 03:02:07.915780       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 03:02:07.915885       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 03:02:08.303849       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:02:08.303889       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2e0ef0e2fb337ae9b049785c43a6b6c91df3123b8022702bc044cc700a168e34] <==
	I0214 03:02:07.725280       1 options.go:220] external host was not specified, using 192.168.49.2
	I0214 03:02:07.726333       1 server.go:148] Version: v1.28.4
	I0214 03:02:07.726365       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0214 03:02:07.726591       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9] <==
	I0214 03:02:12.182243       1 controller.go:116] Starting legacy_token_tracking_controller
	I0214 03:02:12.408067       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0214 03:02:12.408702       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0214 03:02:12.408855       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0214 03:02:12.449899       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0214 03:02:12.450245       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0214 03:02:12.760290       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0214 03:02:12.771534       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0214 03:02:12.972348       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0214 03:02:12.972629       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 03:02:12.973413       1 shared_informer.go:318] Caches are synced for configmaps
	I0214 03:02:12.972532       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 03:02:12.982849       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0214 03:02:12.982879       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0214 03:02:12.986324       1 aggregator.go:166] initial CRD sync complete...
	I0214 03:02:12.987584       1 autoregister_controller.go:141] Starting autoregister controller
	I0214 03:02:12.987756       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 03:02:12.987859       1 cache.go:39] Caches are synced for autoregister controller
	I0214 03:02:12.994558       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0214 03:02:13.006238       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0214 03:02:13.008970       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0214 03:02:13.015850       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0214 03:02:13.018970       1 trace.go:236] Trace[1256381496]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6aaff72e-d44c-4b15-ba5f-2c3a87764171,client:192.168.49.2,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-991896,user-agent:kubelet/v1.28.4 (linux/arm64) kubernetes/bae2c62,verb:DELETE (14-Feb-2024 03:02:12.510) (total time: 508ms):
	Trace[1256381496]: [508.370281ms] [508.370281ms] END
	I0214 03:02:13.189548       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-controller-manager [e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a] <==
	E0214 03:02:12.733003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.NetworkPolicy: unknown (get networkpolicies.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50524->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733075       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:50862->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ClusterRole: unknown (get clusterroles.rbac.authorization.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50858->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: unknown (get runtimeclasses.node.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50842->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodTemplate: unknown (get podtemplates) - error from a previous attempt: read tcp 192.168.49.2:50830->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:50822->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:50806->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50802->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.734011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ServiceAccount: unknown (get serviceaccounts) - error from a previous attempt: read tcp 192.168.49.2:50800->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.734162       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: unknown - error from a previous attempt: read tcp 192.168.49.2:50698->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.735714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1beta3.PriorityLevelConfiguration: unknown (get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50774->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.735816       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50772->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.735969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Lease: unknown (get leases.coordination.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50756->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.736043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50752->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.736115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:50742->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.736269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50728->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.738875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:50708->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.738978       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps) - error from a previous attempt: read tcp 192.168.49.2:50686->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Ingress: unknown (get ingresses.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50684->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50666->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Deployment: unknown (get deployments.apps) - error from a previous attempt: read tcp 192.168.49.2:50566->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Role: unknown (get roles.rbac.authorization.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50724->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739403       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1beta3.FlowSchema: unknown (get flowschemas.flowcontrol.apiserver.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50622->192.168.49.2:8441: read: connection reset by peer
	I0214 03:02:13.700204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.015µs"
	I0214 03:02:14.303286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.592µs"
	
	
	==> kube-proxy [4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364] <==
	I0214 03:01:21.026971       1 server_others.go:69] "Using iptables proxy"
	I0214 03:01:21.053831       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0214 03:01:21.077490       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 03:01:21.079326       1 server_others.go:152] "Using iptables Proxier"
	I0214 03:01:21.079421       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 03:01:21.079852       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 03:01:21.079976       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 03:01:21.080412       1 server.go:846] "Version info" version="v1.28.4"
	I0214 03:01:21.080816       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 03:01:21.082424       1 config.go:188] "Starting service config controller"
	I0214 03:01:21.082816       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 03:01:21.082990       1 config.go:97] "Starting endpoint slice config controller"
	I0214 03:01:21.083080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 03:01:21.083968       1 config.go:315] "Starting node config controller"
	I0214 03:01:21.084079       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 03:01:21.183836       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0214 03:01:21.183942       1 shared_informer.go:318] Caches are synced for service config
	I0214 03:01:21.184210       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [6a61b69672281a143756e74f626edc7cd2d1fda0d86f58cf826e5cff82bb3e3b] <==
	I0214 03:02:07.903207       1 server_others.go:69] "Using iptables proxy"
	I0214 03:02:07.945190       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0214 03:02:08.036871       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 03:02:08.038956       1 server_others.go:152] "Using iptables Proxier"
	I0214 03:02:08.038999       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 03:02:08.039009       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 03:02:08.039069       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 03:02:08.039339       1 server.go:846] "Version info" version="v1.28.4"
	I0214 03:02:08.039359       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 03:02:08.040697       1 config.go:188] "Starting service config controller"
	I0214 03:02:08.040732       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 03:02:08.040812       1 config.go:97] "Starting endpoint slice config controller"
	I0214 03:02:08.040822       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 03:02:08.041345       1 config.go:315] "Starting node config controller"
	I0214 03:02:08.041361       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 03:02:08.141378       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0214 03:02:08.141383       1 shared_informer.go:318] Caches are synced for service config
	I0214 03:02:08.141457       1 shared_informer.go:318] Caches are synced for node config
	W0214 03:02:08.423932       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0214 03:02:08.423995       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0214 03:02:08.424019       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	
	
	==> kube-scheduler [a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6] <==
	E0214 03:01:03.400143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0214 03:01:03.400343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0214 03:01:03.400551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0214 03:01:03.400756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 03:01:03.400954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 03:01:03.400854       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0214 03:01:03.401213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0214 03:01:03.401377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 03:01:03.401504       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0214 03:01:04.482818       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0214 03:02:12.848342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:50398->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.849626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:50424->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.849907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:50446->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.850143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:50462->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.850445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:50440->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.850659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:50434->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.854790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50454->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.855602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:50384->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.855765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:50364->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.855920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50422->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.856070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50370->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.856230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50346->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.856381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:50350->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.856551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:50414->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.881259       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50248->192.168.49.2:8441: read: connection reset by peer
	
	
	==> kubelet <==
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.536157    3564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.540367    3564 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-991896" podUID="ae520740-8862-4bff-9b06-2457c835adfc"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.540859    3564 status_manager.go:853] "Failed to get status for pod" podUID="309a145a-a578-407d-93ac-e7b34f958c71" pod="kube-system/kube-proxy-kd7sf" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kd7sf\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: E0214 03:02:08.541039    3564 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-991896\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-991896"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.541166    3564 scope.go:117] "RemoveContainer" containerID="2e0ef0e2fb337ae9b049785c43a6b6c91df3123b8022702bc044cc700a168e34"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.541935    3564 status_manager.go:853] "Failed to get status for pod" podUID="79cf7d44-3393-4acc-9a89-8c2696428c1f" pod="kube-system/coredns-5dd5756b68-jvd5k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jvd5k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.542162    3564 status_manager.go:853] "Failed to get status for pod" podUID="379ddb1d57c8632e0c8c7b8af30cbaf4" pod="kube-system/kube-apiserver-functional-991896" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-991896\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.542319    3564 status_manager.go:853] "Failed to get status for pod" podUID="3c8003a7-b2ec-4b9f-976e-b4eb23488340" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.542618    3564 status_manager.go:853] "Failed to get status for pod" podUID="d461098d-546c-422d-900a-eaa6fe79164a" pod="kube-system/kindnet-mh6zx" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-mh6zx\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: E0214 03:02:08.546785    3564 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-991896.17b39b903cae033f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"500", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-991896", UID:"379ddb1d57c8632e0c8c7b8af30cbaf4", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Pulled", Message:"Container image \"registry.k8s.io/kube-apiserver:v1.28.4\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"functional-991896"}, FirstTimestamp:time.Date(2024, time.February, 14, 3, 2, 7, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 14, 3, 2, 8, 545625706, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-991896"}': 'Patch "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-991896.17b39b903cae033f": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.649456    3564 status_manager.go:853] "Failed to get status for pod" podUID="e8f785d6d77d9f3c8770b2490e72cd74" pod="kube-system/kube-controller-manager-functional-991896" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-991896\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.649867    3564 status_manager.go:853] "Failed to get status for pod" podUID="3c8003a7-b2ec-4b9f-976e-b4eb23488340" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.650118    3564 status_manager.go:853] "Failed to get status for pod" podUID="d461098d-546c-422d-900a-eaa6fe79164a" pod="kube-system/kindnet-mh6zx" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-mh6zx\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.650351    3564 status_manager.go:853] "Failed to get status for pod" podUID="309a145a-a578-407d-93ac-e7b34f958c71" pod="kube-system/kube-proxy-kd7sf" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kd7sf\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.650952    3564 status_manager.go:853] "Failed to get status for pod" podUID="79cf7d44-3393-4acc-9a89-8c2696428c1f" pod="kube-system/coredns-5dd5756b68-jvd5k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jvd5k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.651461    3564 status_manager.go:853] "Failed to get status for pod" podUID="379ddb1d57c8632e0c8c7b8af30cbaf4" pod="kube-system/kube-apiserver-functional-991896" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-991896\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:09 functional-991896 kubelet[3564]: I0214 03:02:09.542890    3564 scope.go:117] "RemoveContainer" containerID="28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c"
	Feb 14 03:02:09 functional-991896 kubelet[3564]: I0214 03:02:09.559076    3564 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-991896" podUID="ae520740-8862-4bff-9b06-2457c835adfc"
	Feb 14 03:02:10 functional-991896 kubelet[3564]: I0214 03:02:10.347533    3564 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1dbc4f3504298fd95e33ef4f99ee62f2" path="/var/lib/kubelet/pods/1dbc4f3504298fd95e33ef4f99ee62f2/volumes"
	Feb 14 03:02:12 functional-991896 kubelet[3564]: E0214 03:02:12.586333    3564 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50290->192.168.49.2:8441: read: connection reset by peer
	Feb 14 03:02:12 functional-991896 kubelet[3564]: E0214 03:02:12.600316    3564 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50276->192.168.49.2:8441: read: connection reset by peer
	Feb 14 03:02:12 functional-991896 kubelet[3564]: E0214 03:02:12.600477    3564 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50324->192.168.49.2:8441: read: connection reset by peer
	Feb 14 03:02:13 functional-991896 kubelet[3564]: I0214 03:02:13.019426    3564 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-991896"
	Feb 14 03:02:13 functional-991896 kubelet[3564]: I0214 03:02:13.567780    3564 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-991896" podUID="ae520740-8862-4bff-9b06-2457c835adfc"
	Feb 14 03:02:15 functional-991896 kubelet[3564]: I0214 03:02:15.024831    3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-991896" podStartSLOduration=2.024726059 podCreationTimestamp="2024-02-14 03:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 03:02:13.584190434 +0000 UTC m=+7.439209740" watchObservedRunningTime="2024-02-14 03:02:15.024726059 +0000 UTC m=+8.879745349"
	
	
	==> storage-provisioner [d897622bddb5428b6275a2991703a026f757508beb3ee8ec6e5e3d1d7d187e61] <==
	I0214 03:02:07.953094       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 03:02:08.000510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 03:02:08.000562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786] <==
	I0214 03:01:51.062958       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 03:01:51.094670       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 03:01:51.094741       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 03:01:51.105892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 03:01:51.106764       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-991896_a6a8af35-3a6a-48cc-af8f-ff9f46abfab3!
	I0214 03:01:51.106284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82706228-3b48-4dab-b5b4-5bb35f7a8242", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-991896_a6a8af35-3a6a-48cc-af8f-ff9f46abfab3 became leader
	I0214 03:01:51.207567       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-991896_a6a8af35-3a6a-48cc-af8f-ff9f46abfab3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-991896 -n functional-991896
helpers_test.go:261: (dbg) Run:  kubectl --context functional-991896 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ExtraConfig (23.76s)
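
Note: the root cause of this failure is visible in the kube-apiserver log above. The restarted apiserver exited immediately with "failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use", meaning the previous apiserver instance still held port 8441 when the replacement started. A quick way to confirm which process owns the port inside the node (a diagnostic sketch, not part of the recorded test run; assumes ss from iproute2 is present in the kicbase image) is:

	out/minikube-linux-arm64 -p functional-991896 ssh "sudo ss -ltnp | grep 8441"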

                                                
                                    
TestFunctional/serial/ComponentHealth (2.61s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-991896 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2024-02-14 03:02:13 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0x400000dad0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0x40000b2000} Ready:false RestartCount:1 Image:registry.k8s.io/kube-apiserver:v1.28.4 ImageID:registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb ContainerID:containerd://51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9}]}
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
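
The failing condition above is consistent with the apiserver restart recorded in TestFunctional/serial/ExtraConfig: the kube-apiserver container shows RestartCount:1 and had only just come back up, so its Ready condition was still False when the health check ran. The per-pod readiness view that the test evaluates can be approximated with a one-liner (a sketch using the same label selector as functional_test.go:806; the jsonpath expression is illustrative, not the test's own code):

	kubectl --context functional-991896 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'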
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-991896
helpers_test.go:235: (dbg) docker inspect functional-991896:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738",
	        "Created": "2024-02-14T03:00:44.23705449Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1156915,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:00:44.557320955Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738/hostname",
	        "HostsPath": "/var/lib/docker/containers/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738/hosts",
	        "LogPath": "/var/lib/docker/containers/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738/d26660904130b8a5dfc26b500852a92a89f2c37b8d0f22710685827355e95738-json.log",
	        "Name": "/functional-991896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-991896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-991896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cfe668a3012eefc81675181b0604d21a4f24c834b18b63f4f28673af93542e5e-init/diff:/var/lib/docker/overlay2/2b57dacbb0185892ad2774651ca7e304a0e7ce49c55385fdb5828fd98438b35e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfe668a3012eefc81675181b0604d21a4f24c834b18b63f4f28673af93542e5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfe668a3012eefc81675181b0604d21a4f24c834b18b63f4f28673af93542e5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfe668a3012eefc81675181b0604d21a4f24c834b18b63f4f28673af93542e5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-991896",
	                "Source": "/var/lib/docker/volumes/functional-991896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-991896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-991896",
	                "name.minikube.sigs.k8s.io": "functional-991896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6162f4aaee34d63ba1a12e89259b5dea9a35e91978e41267ff63a574214e87c3",
	            "SandboxKey": "/var/run/docker/netns/6162f4aaee34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34047"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34043"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34045"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34044"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-991896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d26660904130",
	                        "functional-991896"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "cd73f64c59256848a6ae734282eb73b16b89679a5c149f9a0d8d2967e49ff9f2",
	                    "EndpointID": "443e9dcfb8391accd495bd72524aa18cfe6117b96990a88737a19f584b83cc18",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-991896",
	                        "d26660904130"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-991896 -n functional-991896
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 logs -n 25: (1.651748615s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-013257 --log_dir                                                  | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | /tmp/nospam-013257 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-013257                                                         | nospam-013257     | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	| start   | -p functional-991896                                                     | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:01 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-991896                                                     | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-991896 cache add                                              | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-991896 cache add                                              | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-991896 cache add                                              | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-991896 cache add                                              | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | minikube-local-cache-test:functional-991896                              |                   |         |         |                     |                     |
	| cache   | functional-991896 cache delete                                           | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | minikube-local-cache-test:functional-991896                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	| ssh     | functional-991896 ssh sudo                                               | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-991896                                                        | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-991896 ssh                                                    | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-991896 cache reload                                           | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	| ssh     | functional-991896 ssh                                                    | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-991896 kubectl --                                             | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | --context functional-991896                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-991896                                                     | functional-991896 | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 03:01:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
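
Every entry below follows that klog-style header. A small parsing sketch (not part of the report) for pulling the fields back out in Go:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogRe matches the documented header:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogRe = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
        line := "I0214 03:01:52.083370 1160645 out.go:291] Setting OutFile to fd 1 ..."
        if m := klogRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }
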
	I0214 03:01:52.083370 1160645 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:01:52.083564 1160645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:01:52.083568 1160645 out.go:304] Setting ErrFile to fd 2...
	I0214 03:01:52.083573 1160645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:01:52.083918 1160645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:01:52.084348 1160645 out.go:298] Setting JSON to false
	I0214 03:01:52.085873 1160645 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20658,"bootTime":1707859054,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 03:01:52.085960 1160645 start.go:138] virtualization:  
	I0214 03:01:52.088831 1160645 out.go:177] * [functional-991896] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 03:01:52.091400 1160645 out.go:177]   - MINIKUBE_LOCATION=18166
	I0214 03:01:52.093261 1160645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:01:52.091596 1160645 notify.go:220] Checking for updates...
	I0214 03:01:52.097987 1160645 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:01:52.100119 1160645 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 03:01:52.102273 1160645 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:01:52.104356 1160645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:01:52.106953 1160645 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:01:52.107046 1160645 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:01:52.128444 1160645 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:01:52.128558 1160645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:01:52.207780 1160645 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:69 SystemTime:2024-02-14 03:01:52.198306693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:01:52.207872 1160645 docker.go:295] overlay module found
	I0214 03:01:52.209947 1160645 out.go:177] * Using the docker driver based on existing profile
	I0214 03:01:52.211653 1160645 start.go:298] selected driver: docker
	I0214 03:01:52.211662 1160645 start.go:902] validating driver "docker" against &{Name:functional-991896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:01:52.211745 1160645 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:01:52.211846 1160645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:01:52.285703 1160645 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:69 SystemTime:2024-02-14 03:01:52.276640232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:01:52.286125 1160645 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 03:01:52.286167 1160645 cni.go:84] Creating CNI manager for ""
	I0214 03:01:52.286175 1160645 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 03:01:52.286186 1160645 start_flags.go:321] config:
	{Name:functional-991896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:01:52.289608 1160645 out.go:177] * Starting control plane node functional-991896 in cluster functional-991896
	I0214 03:01:52.291629 1160645 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0214 03:01:52.293788 1160645 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 03:01:52.295763 1160645 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 03:01:52.295849 1160645 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 03:01:52.295841 1160645 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0214 03:01:52.295866 1160645 cache.go:56] Caching tarball of preloaded images
	I0214 03:01:52.296049 1160645 preload.go:174] Found /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0214 03:01:52.296062 1160645 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0214 03:01:52.296191 1160645 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/config.json ...
	I0214 03:01:52.311744 1160645 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 03:01:52.311759 1160645 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 03:01:52.311782 1160645 cache.go:194] Successfully downloaded all kic artifacts
	I0214 03:01:52.311818 1160645 start.go:365] acquiring machines lock for functional-991896: {Name:mk593e53724b0278df4a8322a2172870edf53457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 03:01:52.311895 1160645 start.go:369] acquired machines lock for "functional-991896" in 58.213µs
	I0214 03:01:52.311916 1160645 start.go:96] Skipping create...Using existing machine configuration
	I0214 03:01:52.311921 1160645 fix.go:54] fixHost starting: 
	I0214 03:01:52.312201 1160645 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	I0214 03:01:52.328082 1160645 fix.go:102] recreateIfNeeded on functional-991896: state=Running err=<nil>
	W0214 03:01:52.328108 1160645 fix.go:128] unexpected machine state, will restart: <nil>
	I0214 03:01:52.330287 1160645 out.go:177] * Updating the running docker "functional-991896" container ...
	I0214 03:01:52.332266 1160645 machine.go:88] provisioning docker machine ...
	I0214 03:01:52.332286 1160645 ubuntu.go:169] provisioning hostname "functional-991896"
	I0214 03:01:52.332357 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:52.349504 1160645 main.go:141] libmachine: Using SSH client type: native
	I0214 03:01:52.349934 1160645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0214 03:01:52.349945 1160645 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-991896 && echo "functional-991896" | sudo tee /etc/hostname
	I0214 03:01:52.501267 1160645 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-991896
	
	I0214 03:01:52.501348 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:52.520680 1160645 main.go:141] libmachine: Using SSH client type: native
	I0214 03:01:52.521221 1160645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0214 03:01:52.521244 1160645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-991896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-991896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-991896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 03:01:52.656536 1160645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
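
The shell snippet above either rewrites the 127.0.1.1 entry or appends one so the hostname resolves locally. A rough Go equivalent of that logic (an illustrative sketch; ensureHosts is a hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHosts mirrors the shell above: leave /etc/hosts alone if the
    // hostname already appears, otherwise rewrite or append the 127.0.1.1 line.
    func ensureHosts(path, hostname string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        content := string(b)
        present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
        if present.MatchString(content) {
            return nil // same role as the grep -xq guard
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(content) {
            content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
        } else {
            content += "127.0.1.1 " + hostname + "\n"
        }
        return os.WriteFile(path, []byte(content), 0644)
    }

    func main() {
        if err := ensureHosts("/etc/hosts", "functional-991896"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
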
	I0214 03:01:52.656552 1160645 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18166-1129740/.minikube CaCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18166-1129740/.minikube}
	I0214 03:01:52.656575 1160645 ubuntu.go:177] setting up certificates
	I0214 03:01:52.656583 1160645 provision.go:83] configureAuth start
	I0214 03:01:52.656650 1160645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-991896
	I0214 03:01:52.681850 1160645 provision.go:138] copyHostCerts
	I0214 03:01:52.681908 1160645 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem, removing ...
	I0214 03:01:52.681916 1160645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem
	I0214 03:01:52.681995 1160645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem (1082 bytes)
	I0214 03:01:52.682090 1160645 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem, removing ...
	I0214 03:01:52.682094 1160645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem
	I0214 03:01:52.682120 1160645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem (1123 bytes)
	I0214 03:01:52.682177 1160645 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem, removing ...
	I0214 03:01:52.682181 1160645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem
	I0214 03:01:52.682204 1160645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem (1675 bytes)
	I0214 03:01:52.682243 1160645 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem org=jenkins.functional-991896 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-991896]
	I0214 03:01:53.280022 1160645 provision.go:172] copyRemoteCerts
	I0214 03:01:53.280085 1160645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 03:01:53.280125 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.296717 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.392513 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0214 03:01:53.419618 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 03:01:53.444751 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 03:01:53.472216 1160645 provision.go:86] duration metric: configureAuth took 815.619987ms
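
configureAuth regenerates the machine's server certificate with the SAN list logged above. A condensed crypto/x509 sketch of that kind of issuance (issueServerCert is a hypothetical helper under stated assumptions; the real code lives in minikube's provisioner):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server cert against an already-loaded CA,
    // carrying the DNS and IP SANs shown in the provision.go:112 line above.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-991896"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "functional-991896"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }

    func main() {} // sketch only; loading the CA pair is omitted
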
	I0214 03:01:53.472245 1160645 ubuntu.go:193] setting minikube options for container-runtime
	I0214 03:01:53.472467 1160645 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:01:53.472475 1160645 machine.go:91] provisioned docker machine in 1.14019944s
	I0214 03:01:53.472482 1160645 start.go:300] post-start starting for "functional-991896" (driver="docker")
	I0214 03:01:53.472492 1160645 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 03:01:53.472542 1160645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 03:01:53.472580 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.489921 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.585090 1160645 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 03:01:53.588469 1160645 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 03:01:53.588496 1160645 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 03:01:53.588505 1160645 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 03:01:53.588512 1160645 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 03:01:53.588521 1160645 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/addons for local assets ...
	I0214 03:01:53.588582 1160645 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/files for local assets ...
	I0214 03:01:53.588668 1160645 filesync.go:149] local asset: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem -> 11350872.pem in /etc/ssl/certs
	I0214 03:01:53.588756 1160645 filesync.go:149] local asset: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/test/nested/copy/1135087/hosts -> hosts in /etc/test/nested/copy/1135087
	I0214 03:01:53.588800 1160645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1135087
	I0214 03:01:53.597903 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem --> /etc/ssl/certs/11350872.pem (1708 bytes)
	I0214 03:01:53.622706 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/test/nested/copy/1135087/hosts --> /etc/test/nested/copy/1135087/hosts (40 bytes)
	I0214 03:01:53.647947 1160645 start.go:303] post-start completed in 175.450645ms
	I0214 03:01:53.648019 1160645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:01:53.648071 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.665594 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.756579 1160645 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 03:01:53.761457 1160645 fix.go:56] fixHost completed within 1.449528128s
	I0214 03:01:53.761472 1160645 start.go:83] releasing machines lock for "functional-991896", held for 1.449569579s
	I0214 03:01:53.761571 1160645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-991896
	I0214 03:01:53.778518 1160645 ssh_runner.go:195] Run: cat /version.json
	I0214 03:01:53.778561 1160645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 03:01:53.778565 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.778662 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:01:53.807750 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.808448 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:01:53.903099 1160645 ssh_runner.go:195] Run: systemctl --version
	I0214 03:01:54.044926 1160645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 03:01:54.049547 1160645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0214 03:01:54.068118 1160645 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0214 03:01:54.068214 1160645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 03:01:54.077894 1160645 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0214 03:01:54.077909 1160645 start.go:475] detecting cgroup driver to use...
	I0214 03:01:54.077939 1160645 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 03:01:54.077987 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0214 03:01:54.091133 1160645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0214 03:01:54.103582 1160645 docker.go:217] disabling cri-docker service (if available) ...
	I0214 03:01:54.103651 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 03:01:54.118267 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 03:01:54.130311 1160645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 03:01:54.243377 1160645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 03:01:54.363262 1160645 docker.go:233] disabling docker service ...
	I0214 03:01:54.363335 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 03:01:54.376532 1160645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 03:01:54.388637 1160645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 03:01:54.497704 1160645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 03:01:54.614435 1160645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 03:01:54.629108 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 03:01:54.648623 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0214 03:01:54.659091 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0214 03:01:54.669120 1160645 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0214 03:01:54.669179 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0214 03:01:54.679356 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:01:54.689903 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0214 03:01:54.700106 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:01:54.710262 1160645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 03:01:54.719723 1160645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
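
The sed runs above rewrite /etc/containerd/config.toml in place. The same edit expressed in Go (an illustrative sketch only), here for the SystemdCgroup toggle, preserving indentation the way the sed capture group does:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        b, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAllString(string(b), "${1}SystemdCgroup = false")
        if err := os.WriteFile(path, []byte(out), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
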
	I0214 03:01:54.730033 1160645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 03:01:54.738693 1160645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 03:01:54.746902 1160645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:01:54.849330 1160645 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0214 03:01:55.062851 1160645 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0214 03:01:55.062926 1160645 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0214 03:01:55.067246 1160645 start.go:543] Will wait 60s for crictl version
	I0214 03:01:55.067303 1160645 ssh_runner.go:195] Run: which crictl
	I0214 03:01:55.071302 1160645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 03:01:55.112631 1160645 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0214 03:01:55.112739 1160645 ssh_runner.go:195] Run: containerd --version
	I0214 03:01:55.148154 1160645 ssh_runner.go:195] Run: containerd --version
	I0214 03:01:55.181562 1160645 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0214 03:01:55.183674 1160645 cli_runner.go:164] Run: docker network inspect functional-991896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 03:01:55.199716 1160645 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 03:01:55.205640 1160645 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0214 03:01:55.207836 1160645 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 03:01:55.207919 1160645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 03:01:55.244557 1160645 containerd.go:612] all images are preloaded for containerd runtime.
	I0214 03:01:55.244569 1160645 containerd.go:519] Images already preloaded, skipping extraction
	I0214 03:01:55.244631 1160645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 03:01:55.292781 1160645 containerd.go:612] all images are preloaded for containerd runtime.
	I0214 03:01:55.292794 1160645 cache_images.go:84] Images are preloaded, skipping loading
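
The preload check above shells out to crictl and inspects its JSON. A compact sketch of that pattern (the "images"/"repoTags" field names follow crictl's JSON output; treat the parsing as an approximation, not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        // Same command the log shows: sudo crictl images --output json
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            panic(err)
        }
        fmt.Printf("found %d images\n", len(imgs.Images))
    }
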
	I0214 03:01:55.292863 1160645 ssh_runner.go:195] Run: sudo crictl info
	I0214 03:01:55.329890 1160645 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0214 03:01:55.329913 1160645 cni.go:84] Creating CNI manager for ""
	I0214 03:01:55.329921 1160645 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 03:01:55.329931 1160645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 03:01:55.329951 1160645 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-991896 NodeName:functional-991896 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 03:01:55.330073 1160645 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-991896"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
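The generated file above bundles four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A short sketch for walking such a file and printing each document's kind, assuming gopkg.in/yaml.v3 is available (the path matches the kubeadm.yaml.new scp step below):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f) // yaml.v3 decodes multi-document streams
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
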
	I0214 03:01:55.330141 1160645 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-991896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0214 03:01:55.330215 1160645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0214 03:01:55.339335 1160645 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 03:01:55.339400 1160645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 03:01:55.348243 1160645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0214 03:01:55.366814 1160645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 03:01:55.384635 1160645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I0214 03:01:55.404750 1160645 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 03:01:55.408397 1160645 certs.go:56] Setting up /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896 for IP: 192.168.49.2
	I0214 03:01:55.408419 1160645 certs.go:190] acquiring lock for shared ca certs: {Name:mk121f32762802a204d98d3cbcae9456442a0756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:01:55.408573 1160645 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key
	I0214 03:01:55.408633 1160645 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key
	I0214 03:01:55.408709 1160645 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.key
	I0214 03:01:55.408752 1160645 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/apiserver.key.dd3b5fb2
	I0214 03:01:55.408791 1160645 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/proxy-client.key
	I0214 03:01:55.408909 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087.pem (1338 bytes)
	W0214 03:01:55.408937 1160645 certs.go:433] ignoring /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087_empty.pem, impossibly tiny 0 bytes
	I0214 03:01:55.408946 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 03:01:55.408971 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem (1082 bytes)
	I0214 03:01:55.408992 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem (1123 bytes)
	I0214 03:01:55.409019 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem (1675 bytes)
	I0214 03:01:55.409064 1160645 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem (1708 bytes)
	I0214 03:01:55.409768 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 03:01:55.435883 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 03:01:55.466945 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 03:01:55.493522 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 03:01:55.518617 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 03:01:55.545999 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 03:01:55.572271 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 03:01:55.598777 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 03:01:55.624603 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem --> /usr/share/ca-certificates/11350872.pem (1708 bytes)
	I0214 03:01:55.649804 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 03:01:55.674633 1160645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087.pem --> /usr/share/ca-certificates/1135087.pem (1338 bytes)
	I0214 03:01:55.700144 1160645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 03:01:55.719367 1160645 ssh_runner.go:195] Run: openssl version
	I0214 03:01:55.725630 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 03:01:55.735914 1160645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:01:55.739276 1160645 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:55 /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:01:55.739330 1160645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:01:55.746526 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 03:01:55.755807 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1135087.pem && ln -fs /usr/share/ca-certificates/1135087.pem /etc/ssl/certs/1135087.pem"
	I0214 03:01:55.765290 1160645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1135087.pem
	I0214 03:01:55.768738 1160645 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 03:00 /usr/share/ca-certificates/1135087.pem
	I0214 03:01:55.768794 1160645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1135087.pem
	I0214 03:01:55.775800 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1135087.pem /etc/ssl/certs/51391683.0"
	I0214 03:01:55.784745 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11350872.pem && ln -fs /usr/share/ca-certificates/11350872.pem /etc/ssl/certs/11350872.pem"
	I0214 03:01:55.794186 1160645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11350872.pem
	I0214 03:01:55.797751 1160645 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 03:00 /usr/share/ca-certificates/11350872.pem
	I0214 03:01:55.797808 1160645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11350872.pem
	I0214 03:01:55.805681 1160645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11350872.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 03:01:55.815012 1160645 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 03:01:55.818461 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0214 03:01:55.825220 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0214 03:01:55.832494 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0214 03:01:55.839551 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0214 03:01:55.846516 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0214 03:01:55.854046 1160645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
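
Each openssl run above asserts the certificate stays valid for another 86400 seconds (24 hours). The equivalent check in Go with crypto/x509 (an illustrative sketch, not the test's implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM cert at path is still valid d from now,
    // mirroring `openssl x509 -noout -checkend 86400` for d = 24h.
    func validFor(path string, d time.Duration) (bool, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(b)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.After(time.Now().Add(d)), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("valid for 24h:", ok)
    }
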
	I0214 03:01:55.861374 1160645 kubeadm.go:404] StartCluster: {Name:functional-991896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:01:55.861455 1160645 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0214 03:01:55.861532 1160645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 03:01:55.899234 1160645 cri.go:89] found id: "e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786"
	I0214 03:01:55.899247 1160645 cri.go:89] found id: "307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832"
	I0214 03:01:55.899252 1160645 cri.go:89] found id: "0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f"
	I0214 03:01:55.899256 1160645 cri.go:89] found id: "4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364"
	I0214 03:01:55.899259 1160645 cri.go:89] found id: "c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6"
	I0214 03:01:55.899263 1160645 cri.go:89] found id: "a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6"
	I0214 03:01:55.899267 1160645 cri.go:89] found id: "28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c"
	I0214 03:01:55.899270 1160645 cri.go:89] found id: "e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a"
	I0214 03:01:55.899274 1160645 cri.go:89] found id: "b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3"
	I0214 03:01:55.899287 1160645 cri.go:89] found id: ""
	I0214 03:01:55.899337 1160645 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0214 03:01:55.932616 1160645 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa","pid":1654,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa/rootfs","created":"2024-02-14T03:01:20.359446991Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_3c8003a7-b2ec-4b9f-976e-b4eb23488340","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3c8003a7-b2ec-4b9f-976e-b4eb23488340"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f","pid":1871,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f/rootfs","created":"2024-02-14T03:01:21.108360966Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10","io.kubernetes.cri.sandbox-name":"kindnet-mh6zx","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d461098d-546c-422d-900a-eaa6fe79164a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8","pid":2093,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8/rootfs","created":"2024-02-14T03:01:34.884131219Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-jvd5k_79cf7d44-3393-4acc-9a89-8c2696428c1f","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-jvd5k","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"79cf7d44-3393-4acc-9a89-8c2696428c1f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c","pid":1305,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c/rootfs","created":"2024-02-14T03:00:58.739381517Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri.sandbox-id":"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1dbc4f3504298fd95e33ef4f99ee62f2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04","pid":1176,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04/rootfs","created":"2024-02-14T03:00:58.545605517Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-991896_1dbc4f3504298fd95e33ef4f99ee62f2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1dbc4f3504298fd95e33ef4f99ee62f2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832","pid":2123,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832/rootfs","created":"2024-02-14T03:01:34.964141072Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-jvd5k","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"79cf7d44-3393-4acc-9a89-8c2696428c1f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364","pid":1805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364/rootfs","created":"2024-02-14T03:01:20.934778227Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri.sandbox-id":"dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a","io.kubernetes.cri.sandbox-name":"kube-proxy-kd7sf","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"309a145a-a578-407d-93ac-e7b34f958c71"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781","pid":1150,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781/rootfs","created":"2024-02-14T03:00:58.500044827Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-991896_e8f785d6d77d9f3c8770b2490e72cd74","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e8f785d6d77d9f3c8770b2490e72cd74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10","pid":1738,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10/rootfs","created":"2024-02-14T03:01:20.808052485Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mh6zx_d461098d-546c-422d-900a-eaa6fe79164a","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mh6zx","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d461098d-546c-422d-900a-eaa6fe79164a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6","pid":1337,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6/rootfs","created":"2024-02-14T03:00:58.810572449Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri.sandbox-id":"f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d73044a0de4a1a0c1234a6cffddf6a7b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3","pid":1238,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3/rootfs","created":"2024-02-14T03:00:58.631247363Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722","io.kubernetes.cri.sandbox-name":"etcd-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"815f2ec0a361159dadd056561a46fc5c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722","pid":1114,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722/rootfs","created":"2024-02-14T03:00:58.483431403Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-991896_815f2ec0a361159dadd056561a46fc5c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"815f2ec0a361159dadd056561a46fc5c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a","pid":1776,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a/rootfs","created":"2024-02-14T03:01:20.815759878Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-kd7sf_309a145a-a578-407d-93ac-e7b34f958c71","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-kd7sf","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"309a145a-a578-407d-93ac-e7b34f958c71"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a","pid":1271,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a/rootfs","created":"2024-02-14T03:00:58.690401808Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri.sandbox-id":"5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e8f785d6d77d9f3c8770b2490e72cd74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786","pid":2907,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786/rootfs","created":"2024-02-14T03:01:51.036272833Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3c8003a7-b2ec-4b9f-976e-b4eb23488340"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154","pid":1194,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154/rootfs","created":"2024-02-14T03:00:58.569198989Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-991896_d73044a0de4a1a0c1234a6cffddf6a7b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-991896","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d73044a0de4a1a0c1234a6cffddf6a7b"},"owner":"root"}]
	I0214 03:01:55.932901 1160645 cri.go:126] list returned 16 containers
	I0214 03:01:55.932909 1160645 cri.go:129] container: {ID:06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa Status:running}
	I0214 03:01:55.932924 1160645 cri.go:131] skipping 06a40cca1b2a50cd5f86a5e365ca3b84051d592249b8c8235e094fd3572013aa - not in ps
	I0214 03:01:55.932929 1160645 cri.go:129] container: {ID:0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f Status:running}
	I0214 03:01:55.932934 1160645 cri.go:135] skipping {0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f running}: state = "running", want "paused"
	I0214 03:01:55.932942 1160645 cri.go:129] container: {ID:0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8 Status:running}
	I0214 03:01:55.932947 1160645 cri.go:131] skipping 0bf28a38358cbff58a674586c515b067a14f0e8a2bc7b72872bad3df0f77b5f8 - not in ps
	I0214 03:01:55.932952 1160645 cri.go:129] container: {ID:28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c Status:running}
	I0214 03:01:55.932957 1160645 cri.go:135] skipping {28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c running}: state = "running", want "paused"
	I0214 03:01:55.932963 1160645 cri.go:129] container: {ID:301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04 Status:running}
	I0214 03:01:55.932968 1160645 cri.go:131] skipping 301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04 - not in ps
	I0214 03:01:55.932972 1160645 cri.go:129] container: {ID:307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 Status:running}
	I0214 03:01:55.932978 1160645 cri.go:135] skipping {307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 running}: state = "running", want "paused"
	I0214 03:01:55.932983 1160645 cri.go:129] container: {ID:4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 Status:running}
	I0214 03:01:55.932989 1160645 cri.go:135] skipping {4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 running}: state = "running", want "paused"
	I0214 03:01:55.932993 1160645 cri.go:129] container: {ID:5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781 Status:running}
	I0214 03:01:55.933002 1160645 cri.go:131] skipping 5f2562da05d0f0116274c5efd52addec970e9f68ec36fd3d7d7a6bb75c64b781 - not in ps
	I0214 03:01:55.933006 1160645 cri.go:129] container: {ID:6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10 Status:running}
	I0214 03:01:55.933014 1160645 cri.go:131] skipping 6a294c551d311a6355104715fe4df2f66e6eabda12daa616a1d5b524a1cb0c10 - not in ps
	I0214 03:01:55.933019 1160645 cri.go:129] container: {ID:a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 Status:running}
	I0214 03:01:55.933024 1160645 cri.go:135] skipping {a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 running}: state = "running", want "paused"
	I0214 03:01:55.933029 1160645 cri.go:129] container: {ID:b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3 Status:running}
	I0214 03:01:55.933035 1160645 cri.go:135] skipping {b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3 running}: state = "running", want "paused"
	I0214 03:01:55.933040 1160645 cri.go:129] container: {ID:d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722 Status:running}
	I0214 03:01:55.933045 1160645 cri.go:131] skipping d929a556f2a64645f85c3e048773ec01fd8f6af8143dfb8818b99b9e4d3e1722 - not in ps
	I0214 03:01:55.933049 1160645 cri.go:129] container: {ID:dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a Status:running}
	I0214 03:01:55.933054 1160645 cri.go:131] skipping dea25cb9f808eb00899920d77f0ed2479fa5dcfffe6497e3acd7ba12e10d084a - not in ps
	I0214 03:01:55.933058 1160645 cri.go:129] container: {ID:e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a Status:running}
	I0214 03:01:55.933064 1160645 cri.go:135] skipping {e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a running}: state = "running", want "paused"
	I0214 03:01:55.933069 1160645 cri.go:129] container: {ID:e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 Status:running}
	I0214 03:01:55.933074 1160645 cri.go:135] skipping {e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 running}: state = "running", want "paused"
	I0214 03:01:55.933079 1160645 cri.go:129] container: {ID:f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154 Status:running}
	I0214 03:01:55.933085 1160645 cri.go:131] skipping f2d510e64e146c4688110387e61a19f10318e044f47a2680484a88810eb65154 - not in ps
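
cri.go cross-references two views of the runtime here: the IDs that `crictl ps` reported for the kube-system namespace, and the full `runc list -f json` task list. A task is only acted on when it appears in both and its runc status matches the requested state (`paused` in this pass), which is why every entry above is skipped as either "not in ps" or `state = "running", want "paused"`. A rough sketch of that filter, with an illustrative `task` struct rather than minikube's actual types:

package main

import (
	"encoding/json"
	"fmt"
)

// task mirrors the two fields we need from `runc list -f json`.
type task struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterByState keeps only IDs that crictl reported (inPs) and whose
// runc status equals want ("paused" in the log above).
func filterByState(raw []byte, inPs map[string]bool, want string) ([]string, error) {
	var tasks []task
	if err := json.Unmarshal(raw, &tasks); err != nil {
		return nil, err
	}
	var ids []string
	for _, t := range tasks {
		if !inPs[t.ID] {
			continue // "skipping ... - not in ps"
		}
		if t.Status != want {
			continue // `state = "running", want "paused"`
		}
		ids = append(ids, t.ID)
	}
	return ids, nil
}

func main() {
	raw := []byte(`[{"id":"abc","status":"running"}]`)
	ids, err := filterByState(raw, map[string]bool{"abc": true}, "paused")
	fmt.Println(ids, err) // [] <nil>: nothing to pause, matching the log
}
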
	I0214 03:01:55.933150 1160645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 03:01:55.942549 1160645 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0214 03:01:55.942560 1160645 kubeadm.go:636] restartCluster start
	I0214 03:01:55.942618 1160645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0214 03:01:55.951158 1160645 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0214 03:01:55.951713 1160645 kubeconfig.go:92] found "functional-991896" server: "https://192.168.49.2:8441"
	I0214 03:01:55.953081 1160645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0214 03:01:55.962071 1160645 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-02-14 03:00:49.859599608 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-02-14 03:01:55.397711128 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
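
This unified diff is what triggers the restart: the test started the cluster with an apiserver extra option (visible as ExtraOptions Key:enable-admission-plugins Value:NamespaceAutoProvision in the StartCluster config above), so the freshly rendered kubeadm.yaml.new no longer matches the kubeadm.yaml the node booted with. The drift check itself is just `diff -u`, whose exit status separates "identical" from "differs"; a sketch under that assumption (diff exits 1 when the files differ, >1 on error):

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfigure runs `diff -u oldPath newPath` and interprets the exit
// status: 0 means the configs match, 1 means they differ and the cluster
// needs a reconfigure, anything else means diff itself failed.
func needsReconfigure(oldPath, newPath string) (bool, []byte, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, out, nil // identical configs
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, out, nil // configs differ: reconfigure
	}
	return false, out, err
}

func main() {
	differs, out, err := needsReconfigure(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("needs reconfigure:", differs, "err:", err)
	fmt.Printf("%s", out)
}
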
	I0214 03:01:55.962081 1160645 kubeadm.go:1135] stopping kube-system containers ...
	I0214 03:01:55.962094 1160645 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0214 03:01:55.962154 1160645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 03:01:56.006593 1160645 cri.go:89] found id: "e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786"
	I0214 03:01:56.006611 1160645 cri.go:89] found id: "307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832"
	I0214 03:01:56.006616 1160645 cri.go:89] found id: "0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f"
	I0214 03:01:56.006619 1160645 cri.go:89] found id: "4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364"
	I0214 03:01:56.006623 1160645 cri.go:89] found id: "c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6"
	I0214 03:01:56.006626 1160645 cri.go:89] found id: "a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6"
	I0214 03:01:56.006629 1160645 cri.go:89] found id: "28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c"
	I0214 03:01:56.006633 1160645 cri.go:89] found id: "e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a"
	I0214 03:01:56.006636 1160645 cri.go:89] found id: "b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3"
	I0214 03:01:56.006643 1160645 cri.go:89] found id: ""
	I0214 03:01:56.006647 1160645 cri.go:234] Stopping containers: [e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f 4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3]
	I0214 03:01:56.006727 1160645 ssh_runner.go:195] Run: which crictl
	I0214 03:01:56.011579 1160645 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f 4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3
	I0214 03:02:01.252643 1160645 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f 4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3: (5.241021437s)
	W0214 03:02:01.252702 1160645 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786 307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832 0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f 4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364 c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6 a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6 28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3: Process exited with status 1
	stdout:
	e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786
	307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832
	0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f
	4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364
	
	stderr:
	E0214 03:02:01.249589    3378 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6\": not found" containerID="c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6"
	time="2024-02-14T03:02:01Z" level=fatal msg="stopping the container \"c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c37d46865c4cc619b3e5b4ac2c51b672875921579c4939998faa3abba87f46c6\": not found"
	I0214 03:02:01.252763 1160645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0214 03:02:01.312005 1160645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 03:02:01.321346 1160645 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 14 03:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 14 03:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 14 03:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 03:00 /etc/kubernetes/scheduler.conf
	
	I0214 03:02:01.321402 1160645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0214 03:02:01.330748 1160645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0214 03:02:01.339868 1160645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0214 03:02:01.351063 1160645 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 03:02:01.351120 1160645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 03:02:01.360311 1160645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0214 03:02:01.369318 1160645 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0214 03:02:01.369373 1160645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 03:02:01.377918 1160645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 03:02:01.387250 1160645 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0214 03:02:01.387264 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:01.447255 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:05.951376 1160645 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.504092911s)
	I0214 03:02:05.951399 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:06.148025 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:06.241669 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:06.374850 1160645 api_server.go:52] waiting for apiserver process to appear ...
	I0214 03:02:06.374926 1160645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 03:02:06.390740 1160645 api_server.go:72] duration metric: took 15.895868ms to wait for apiserver process to appear ...
	I0214 03:02:06.390755 1160645 api_server.go:88] waiting for apiserver healthz status ...
	I0214 03:02:06.390772 1160645 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0214 03:02:06.405145 1160645 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
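
The healthz probe at api_server.go:253 hits https://192.168.49.2:8441/healthz directly and waits for a 200 `ok`. At this point the apiserver presents a certificate that is not in the host trust store, so any standalone probe has to skip TLS verification; a minimal sketch of such a one-shot check:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The minikube apiserver cert is self-issued from the cluster CA,
		// so skip verification for this health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
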
	I0214 03:02:06.432549 1160645 api_server.go:141] control plane version: v1.28.4
	I0214 03:02:06.432569 1160645 api_server.go:131] duration metric: took 41.808199ms to wait for apiserver health ...
	I0214 03:02:06.432577 1160645 cni.go:84] Creating CNI manager for ""
	I0214 03:02:06.432583 1160645 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 03:02:06.434871 1160645 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 03:02:06.437188 1160645 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 03:02:06.444003 1160645 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0214 03:02:06.444018 1160645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 03:02:06.495459 1160645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 03:02:06.865891 1160645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 03:02:06.874359 1160645 system_pods.go:59] 8 kube-system pods found
	I0214 03:02:06.874376 1160645 system_pods.go:61] "coredns-5dd5756b68-jvd5k" [79cf7d44-3393-4acc-9a89-8c2696428c1f] Running
	I0214 03:02:06.874381 1160645 system_pods.go:61] "etcd-functional-991896" [9bfbc3db-6fd3-4e20-94e4-d07ff42c82f1] Running
	I0214 03:02:06.874385 1160645 system_pods.go:61] "kindnet-mh6zx" [d461098d-546c-422d-900a-eaa6fe79164a] Running
	I0214 03:02:06.874390 1160645 system_pods.go:61] "kube-apiserver-functional-991896" [ae520740-8862-4bff-9b06-2457c835adfc] Running
	I0214 03:02:06.874394 1160645 system_pods.go:61] "kube-controller-manager-functional-991896" [0ea10820-57e4-4fcc-aad1-2fc01345a4af] Running
	I0214 03:02:06.874401 1160645 system_pods.go:61] "kube-proxy-kd7sf" [309a145a-a578-407d-93ac-e7b34f958c71] Running
	I0214 03:02:06.874405 1160645 system_pods.go:61] "kube-scheduler-functional-991896" [51a222c9-61be-4c3c-80c1-8abee69a962e] Running
	I0214 03:02:06.874411 1160645 system_pods.go:61] "storage-provisioner" [3c8003a7-b2ec-4b9f-976e-b4eb23488340] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 03:02:06.874418 1160645 system_pods.go:74] duration metric: took 8.516635ms to wait for pod list to return data ...
	I0214 03:02:06.874426 1160645 node_conditions.go:102] verifying NodePressure condition ...
	I0214 03:02:06.877783 1160645 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 03:02:06.877802 1160645 node_conditions.go:123] node cpu capacity is 2
	I0214 03:02:06.877812 1160645 node_conditions.go:105] duration metric: took 3.381913ms to run NodePressure ...
	I0214 03:02:06.877828 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 03:02:07.094421 1160645 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0214 03:02:07.099525 1160645 retry.go:31] will retry after 148.268108ms: kubelet not initialised
	I0214 03:02:07.253836 1160645 retry.go:31] will retry after 229.785569ms: kubelet not initialised
	I0214 03:02:07.489801 1160645 kubeadm.go:787] kubelet initialised
	I0214 03:02:07.489811 1160645 kubeadm.go:788] duration metric: took 395.376995ms waiting for restarted kubelet to initialise ...
	I0214 03:02:07.489819 1160645 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 03:02:07.506775 1160645 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:07.515939 1160645 pod_ready.go:97] node "functional-991896" hosting pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.515954 1160645 pod_ready.go:81] duration metric: took 9.16208ms waiting for pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:07.515963 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "coredns-5dd5756b68-jvd5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.515989 1160645 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-991896" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:07.523249 1160645 pod_ready.go:97] node "functional-991896" hosting pod "etcd-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.523264 1160645 pod_ready.go:81] duration metric: took 7.261953ms waiting for pod "etcd-functional-991896" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:07.523273 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "etcd-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.523299 1160645 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-991896" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:07.531141 1160645 pod_ready.go:97] node "functional-991896" hosting pod "kube-apiserver-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.531156 1160645 pod_ready.go:81] duration metric: took 7.849554ms waiting for pod "kube-apiserver-functional-991896" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:07.531165 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-apiserver-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.531186 1160645 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-991896" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:07.670837 1160645 pod_ready.go:97] node "functional-991896" hosting pod "kube-controller-manager-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.670852 1160645 pod_ready.go:81] duration metric: took 139.657565ms waiting for pod "kube-controller-manager-functional-991896" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:07.670862 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-controller-manager-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:07.670884 1160645 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kd7sf" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:08.069960 1160645 pod_ready.go:97] node "functional-991896" hosting pod "kube-proxy-kd7sf" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:08.069980 1160645 pod_ready.go:81] duration metric: took 399.085537ms waiting for pod "kube-proxy-kd7sf" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:08.069990 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-proxy-kd7sf" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-991896" has status "Ready":"False"
	I0214 03:02:08.070010 1160645 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-991896" in "kube-system" namespace to be "Ready" ...
	I0214 03:02:08.466790 1160645 pod_ready.go:97] node "functional-991896" hosting pod "kube-scheduler-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	I0214 03:02:08.466808 1160645 pod_ready.go:81] duration metric: took 396.786907ms waiting for pod "kube-scheduler-functional-991896" in "kube-system" namespace to be "Ready" ...
	E0214 03:02:08.466818 1160645 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-991896" hosting pod "kube-scheduler-functional-991896" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	I0214 03:02:08.466844 1160645 pod_ready.go:38] duration metric: took 977.015813ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
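
All six pod_ready.go waits above short-circuit for the same reason: the pods are running, but node functional-991896 still reports Ready=False after the kubelet restart, and by the last check the apiserver itself is being cycled (connection refused on 8441). Each wait is a bounded poll; its general shape, reduced to a standalone sketch with an illustrative `waitFor` helper:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls probe every interval until it succeeds or the deadline
// passes, the same shape as the 4m0s "extra waiting" loop in the log.
func waitFor(timeout, interval time.Duration, probe func() error) error {
	deadline := time.Now().Add(timeout)
	var last error
	for time.Now().Before(deadline) {
		if last = probe(); last == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %v: %w", timeout, last)
}

func main() {
	err := waitFor(2*time.Second, 500*time.Millisecond, func() error {
		return errors.New(`node "functional-991896" has status "Ready":"False"`)
	})
	fmt.Println(err)
}
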
	I0214 03:02:08.466860 1160645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0214 03:02:08.477618 1160645 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
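
The oom_adj failure is a race, not a real error: `pgrep` matched no kube-apiserver process in the window between the old instance exiting and its replacement starting, so the shell interpolated an empty PID and built the path /proc//oom_adj. A defensive version of the same check, sketched with standard-library Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints matching PIDs; an empty result must be caught before
	// it is spliced into the /proc path, or cat sees "/proc//oom_adj".
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	pid := strings.TrimSpace(string(out))
	if err != nil || pid == "" {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running yet; skipping oom_adj check")
		return
	}
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("oom_adj for pid %s: %s", pid, data)
}
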
	I0214 03:02:08.477631 1160645 kubeadm.go:640] restartCluster took 12.535065524s
	I0214 03:02:08.477639 1160645 kubeadm.go:406] StartCluster complete in 12.616272602s
	I0214 03:02:08.477662 1160645 settings.go:142] acquiring lock: {Name:mkcc971fda27c724b3c1908f1b3da87aea10d784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:02:08.477716 1160645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:02:08.478450 1160645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/kubeconfig: {Name:mkc9d4ef83ac02b186254a828f8611428408dff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:02:08.478741 1160645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 03:02:08.479002 1160645 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:02:08.479036 1160645 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 03:02:08.479092 1160645 addons.go:69] Setting storage-provisioner=true in profile "functional-991896"
	I0214 03:02:08.479105 1160645 addons.go:234] Setting addon storage-provisioner=true in "functional-991896"
	W0214 03:02:08.479110 1160645 addons.go:243] addon storage-provisioner should already be in state true
	I0214 03:02:08.479152 1160645 host.go:66] Checking if "functional-991896" exists ...
	I0214 03:02:08.479851 1160645 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	I0214 03:02:08.480356 1160645 addons.go:69] Setting default-storageclass=true in profile "functional-991896"
	I0214 03:02:08.480370 1160645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-991896"
	I0214 03:02:08.480671 1160645 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	W0214 03:02:08.481874 1160645 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-991896" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0214 03:02:08.481895 1160645 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0214 03:02:08.481962 1160645 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0214 03:02:08.486626 1160645 out.go:177] * Verifying Kubernetes components...
	I0214 03:02:08.489141 1160645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:02:08.516178 1160645 addons.go:234] Setting addon default-storageclass=true in "functional-991896"
	W0214 03:02:08.516189 1160645 addons.go:243] addon default-storageclass should already be in state true
	I0214 03:02:08.516210 1160645 host.go:66] Checking if "functional-991896" exists ...
	I0214 03:02:08.516661 1160645 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	I0214 03:02:08.578309 1160645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:02:08.580245 1160645 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 03:02:08.580257 1160645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 03:02:08.580328 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:02:08.610114 1160645 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 03:02:08.610126 1160645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 03:02:08.610204 1160645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:02:08.627131 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:02:08.663219 1160645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	E0214 03:02:08.683135 1160645 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0214 03:02:08.683154 1160645 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0214 03:02:08.683169 1160645 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0214 03:02:08.683234 1160645 node_ready.go:35] waiting up to 6m0s for node "functional-991896" to be "Ready" ...
	I0214 03:02:08.683560 1160645 node_ready.go:53] error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	I0214 03:02:08.683569 1160645 node_ready.go:38] duration metric: took 321.911µs waiting for node "functional-991896" to be "Ready" ...
	I0214 03:02:08.686911 1160645 out.go:177] 
	W0214 03:02:08.688680 1160645 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-991896": Get "https://192.168.49.2:8441/api/v1/nodes/functional-991896": dial tcp 192.168.49.2:8441: connect: connection refused
	W0214 03:02:08.688712 1160645 out.go:239] * 
	W0214 03:02:08.689729 1160645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 03:02:08.692239 1160645 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	51a6097d3e58d       04b4c447bb9d4       8 seconds ago        Running             kube-apiserver            1                   74a1b36f1f009       kube-apiserver-functional-991896
	d897622bddb54       ba04bb24b9575       9 seconds ago        Running             storage-provisioner       2                   06a40cca1b2a5       storage-provisioner
	d2d46b9852787       04b4eaa3d3db8       9 seconds ago        Running             kindnet-cni               1                   6a294c551d311       kindnet-mh6zx
	252459b15a307       97e04611ad434       9 seconds ago        Running             coredns                   1                   0bf28a38358cb       coredns-5dd5756b68-jvd5k
	6a61b69672281       3ca3ca488cf13       9 seconds ago        Running             kube-proxy                1                   dea25cb9f808e       kube-proxy-kd7sf
	2e0ef0e2fb337       04b4c447bb9d4       9 seconds ago        Exited              kube-apiserver            0                   74a1b36f1f009       kube-apiserver-functional-991896
	e6001396bdabd       ba04bb24b9575       26 seconds ago       Exited              storage-provisioner       1                   06a40cca1b2a5       storage-provisioner
	307767b829b18       97e04611ad434       42 seconds ago       Exited              coredns                   0                   0bf28a38358cb       coredns-5dd5756b68-jvd5k
	0820611a83e7b       04b4eaa3d3db8       56 seconds ago       Exited              kindnet-cni               0                   6a294c551d311       kindnet-mh6zx
	4f3111ac490b8       3ca3ca488cf13       56 seconds ago       Exited              kube-proxy                0                   dea25cb9f808e       kube-proxy-kd7sf
	a565e51d088ac       05c284c929889       About a minute ago   Running             kube-scheduler            0                   f2d510e64e146       kube-scheduler-functional-991896
	e08be804407a0       9961cbceaf234       About a minute ago   Running             kube-controller-manager   0                   5f2562da05d0f       kube-controller-manager-functional-991896
	b384015744a84       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   d929a556f2a64       etcd-functional-991896
	
	
	==> containerd <==
	Feb 14 03:02:07 functional-991896 containerd[3185]: time="2024-02-14T03:02:07.871320119Z" level=info msg="cleaning up dead shim"
	Feb 14 03:02:07 functional-991896 containerd[3185]: time="2024-02-14T03:02:07.885135967Z" level=info msg="StartContainer for \"d2d46b9852787a2d2b5ad2969283d32e39e8195de40ac64c74f9cc6ba11c6f44\" returns successfully"
	Feb 14 03:02:07 functional-991896 containerd[3185]: time="2024-02-14T03:02:07.891338735Z" level=warning msg="cleanup warnings time=\"2024-02-14T03:02:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3835 runtime=io.containerd.runc.v2\n"
	Feb 14 03:02:07 functional-991896 containerd[3185]: time="2024-02-14T03:02:07.937727728Z" level=info msg="StartContainer for \"d897622bddb5428b6275a2991703a026f757508beb3ee8ec6e5e3d1d7d187e61\" returns successfully"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.348117288Z" level=info msg="StopContainer for \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" with timeout 2 (s)"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.352739959Z" level=info msg="Stop container \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" with signal terminated"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.408245963Z" level=info msg="shim disconnected" id=301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.408500183Z" level=warning msg="cleaning up after shim disconnected" id=301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04 namespace=k8s.io
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.408605961Z" level=info msg="cleaning up dead shim"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.420064120Z" level=warning msg="cleanup warnings time=\"2024-02-14T03:02:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3975 runtime=io.containerd.runc.v2\n"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.554291300Z" level=info msg="CreateContainer within sandbox \"74a1b36f1f009b558ea3d00b90c134bfbeeaed1ec6962dad1195ab6dff9f397a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.651965968Z" level=info msg="CreateContainer within sandbox \"74a1b36f1f009b558ea3d00b90c134bfbeeaed1ec6962dad1195ab6dff9f397a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9\""
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.654206442Z" level=info msg="StartContainer for \"51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9\""
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.658103671Z" level=info msg="shim disconnected" id=28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.658332087Z" level=warning msg="cleaning up after shim disconnected" id=28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c namespace=k8s.io
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.658368451Z" level=info msg="cleaning up dead shim"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.685955152Z" level=warning msg="cleanup warnings time=\"2024-02-14T03:02:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4054 runtime=io.containerd.runc.v2\n"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.690005213Z" level=info msg="StopContainer for \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" returns successfully"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.691439739Z" level=info msg="StopPodSandbox for \"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04\""
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.693333294Z" level=info msg="Container to stop \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.702191580Z" level=info msg="TearDown network for sandbox \"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04\" successfully"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.702342739Z" level=info msg="StopPodSandbox for \"301cbe76c7ec63d8a3ffb6b662b2b84714fc00db8f0070ca82998fcf536d2f04\" returns successfully"
	Feb 14 03:02:08 functional-991896 containerd[3185]: time="2024-02-14T03:02:08.779414398Z" level=info msg="StartContainer for \"51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9\" returns successfully"
	Feb 14 03:02:09 functional-991896 containerd[3185]: time="2024-02-14T03:02:09.545337320Z" level=info msg="RemoveContainer for \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\""
	Feb 14 03:02:09 functional-991896 containerd[3185]: time="2024-02-14T03:02:09.562810921Z" level=info msg="RemoveContainer for \"28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c\" returns successfully"
	
	
	==> coredns [252459b15a307abc1e89512a3fa3dfdd24455b22928ea22fc0cb3c5a5adace30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] 127.0.0.1:50264 - 44066 "HINFO IN 2676038425055356850.1973071159548491042. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048759318s
	
	
	==> coredns [307767b829b1899cfc2092598573aa8d7016aa97e3fc61b4164be40d9d8cf832] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49378 - 9032 "HINFO IN 9192085943000834440.4140252530550617898. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023404045s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-991896
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-991896
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40f210e92693e4612e04be0697de06db21ac5cf0
	                    minikube.k8s.io/name=functional-991896
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T03_01_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 03:01:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-991896
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 03:02:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 03:02:16 +0000   Wed, 14 Feb 2024 03:00:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 03:02:16 +0000   Wed, 14 Feb 2024 03:00:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 03:02:16 +0000   Wed, 14 Feb 2024 03:00:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 03:02:16 +0000   Wed, 14 Feb 2024 03:02:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-991896
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c459f69322ab46b9a77a19277aba5e03
	  System UUID:                b35d1f76-a222-47f8-8c90-bbc2bdc29ed3
	  Boot ID:                    b6f8a130-5377-4a84-9795-3edbfc6d2fc5
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jvd5k                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-functional-991896                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         71s
	  kube-system                 kindnet-mh6zx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-functional-991896             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-functional-991896    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-kd7sf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-functional-991896             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node functional-991896 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node functional-991896 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node functional-991896 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node functional-991896 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node functional-991896 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet          Node functional-991896 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             72s                kubelet          Node functional-991896 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                61s                kubelet          Node functional-991896 status is now: NodeReady
	  Normal  RegisteredNode           59s                node-controller  Node functional-991896 event: Registered Node functional-991896 in Controller
	  Normal  Starting                 11s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s                kubelet          Node functional-991896 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s                kubelet          Node functional-991896 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                kubelet          Node functional-991896 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11s                kubelet          Node functional-991896 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                1s                 kubelet          Node functional-991896 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001133] FS-Cache: O-key=[8] '2bd5c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001075] FS-Cache: N-key=[8] '2bd5c90000000000'
	[  +0.002828] FS-Cache: Duplicate cookie detected
	[  +0.000708] FS-Cache: O-cookie c=0000003b [p=00000039 fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=0000000076fc1031
	[  +0.001081] FS-Cache: O-key=[8] '2bd5c90000000000'
	[  +0.000709] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000005e2f857b
	[  +0.001050] FS-Cache: N-key=[8] '2bd5c90000000000'
	[  +2.757072] FS-Cache: Duplicate cookie detected
	[  +0.000789] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000994] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=0000000073828904
	[  +0.001121] FS-Cache: O-key=[8] '2ad5c90000000000'
	[  +0.000813] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001101] FS-Cache: N-key=[8] '2ad5c90000000000'
	[  +0.290556] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000975] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=00000000eab8090b
	[  +0.001047] FS-Cache: O-key=[8] '30d5c90000000000'
	[  +0.000761] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=00000000bc792bf3
	[  +0.001026] FS-Cache: N-key=[8] '30d5c90000000000'
	
	
	==> etcd [b384015744a842e5388fa4180b1d90a48d2508673c2cf8f894fc985b9008dac3] <==
	{"level":"info","ts":"2024-02-14T03:00:58.756506Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:00:58.756549Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:00:58.75656Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:00:58.756846Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T03:00:58.756864Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T03:00:58.75739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-14T03:00:58.757465Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-14T03:00:59.435597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-14T03:00:59.435649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-14T03:00:59.435665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-02-14T03:00:59.435689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:59.435732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:59.435764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:59.435795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:59.439684Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-991896 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T03:00:59.439841Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:00:59.440928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-14T03:00:59.44759Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:59.447871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:00:59.45582Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:59.458708Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:59.458874Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:59.462956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T03:00:59.465966Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T03:00:59.468002Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:02:17 up  5:44,  0 users,  load average: 2.00, 1.87, 1.89
	Linux functional-991896 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [0820611a83e7bad0751687b6433d5fb5ffb3bece7242071f663e9452989d668f] <==
	I0214 03:01:21.211906       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 03:01:21.212165       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0214 03:01:21.212388       1 main.go:116] setting mtu 1500 for CNI 
	I0214 03:01:21.212479       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 03:01:21.212585       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 03:01:21.506870       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:01:21.507181       1 main.go:227] handling current node
	I0214 03:01:31.523577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:01:31.523605       1 main.go:227] handling current node
	I0214 03:01:41.536605       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:01:41.536647       1 main.go:227] handling current node
	I0214 03:01:51.541484       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:01:51.541525       1 main.go:227] handling current node
	
	
	==> kindnet [d2d46b9852787a2d2b5ad2969283d32e39e8195de40ac64c74f9cc6ba11c6f44] <==
	I0214 03:02:07.914981       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 03:02:07.915236       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0214 03:02:07.915637       1 main.go:116] setting mtu 1500 for CNI 
	I0214 03:02:07.915780       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 03:02:07.915885       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 03:02:08.303849       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:02:08.303889       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2e0ef0e2fb337ae9b049785c43a6b6c91df3123b8022702bc044cc700a168e34] <==
	I0214 03:02:07.725280       1 options.go:220] external host was not specified, using 192.168.49.2
	I0214 03:02:07.726333       1 server.go:148] Version: v1.28.4
	I0214 03:02:07.726365       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0214 03:02:07.726591       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [51a6097d3e58dde050ab0e12ab44af7ad10c84185cfd4809c94423cafb2169e9] <==
	I0214 03:02:12.408702       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0214 03:02:12.408855       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0214 03:02:12.449899       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0214 03:02:12.450245       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0214 03:02:12.760290       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0214 03:02:12.771534       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0214 03:02:12.972348       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0214 03:02:12.972629       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 03:02:12.973413       1 shared_informer.go:318] Caches are synced for configmaps
	I0214 03:02:12.972532       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 03:02:12.982849       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0214 03:02:12.982879       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0214 03:02:12.986324       1 aggregator.go:166] initial CRD sync complete...
	I0214 03:02:12.987584       1 autoregister_controller.go:141] Starting autoregister controller
	I0214 03:02:12.987756       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 03:02:12.987859       1 cache.go:39] Caches are synced for autoregister controller
	I0214 03:02:12.994558       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0214 03:02:13.006238       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0214 03:02:13.008970       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0214 03:02:13.015850       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0214 03:02:13.018970       1 trace.go:236] Trace[1256381496]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6aaff72e-d44c-4b15-ba5f-2c3a87764171,client:192.168.49.2,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-991896,user-agent:kubelet/v1.28.4 (linux/arm64) kubernetes/bae2c62,verb:DELETE (14-Feb-2024 03:02:12.510) (total time: 508ms):
	Trace[1256381496]: [508.370281ms] [508.370281ms] END
	I0214 03:02:13.189548       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 03:02:16.669685       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 03:02:16.674408       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e08be804407a00ca1545d87ebb13d2dd12af52395c0c4d73c6a03a89a649fb2a] <==
	E0214 03:02:12.733225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ClusterRole: unknown (get clusterroles.rbac.authorization.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50858->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: unknown (get runtimeclasses.node.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50842->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodTemplate: unknown (get podtemplates) - error from a previous attempt: read tcp 192.168.49.2:50830->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:50822->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:50806->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.733849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50802->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.734011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ServiceAccount: unknown (get serviceaccounts) - error from a previous attempt: read tcp 192.168.49.2:50800->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.734162       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: unknown - error from a previous attempt: read tcp 192.168.49.2:50698->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.735714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1beta3.PriorityLevelConfiguration: unknown (get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50774->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.735816       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50772->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.735969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Lease: unknown (get leases.coordination.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50756->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.736043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50752->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.736115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:50742->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.736269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50728->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.738875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:50708->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.738978       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps) - error from a previous attempt: read tcp 192.168.49.2:50686->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Ingress: unknown (get ingresses.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50684->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50666->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Deployment: unknown (get deployments.apps) - error from a previous attempt: read tcp 192.168.49.2:50566->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Role: unknown (get roles.rbac.authorization.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50724->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.739403       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1beta3.FlowSchema: unknown (get flowschemas.flowcontrol.apiserver.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50622->192.168.49.2:8441: read: connection reset by peer
	I0214 03:02:13.700204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.015µs"
	I0214 03:02:14.303286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.592µs"
	I0214 03:02:16.682925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.564512ms"
	I0214 03:02:16.683022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.096µs"
	
	
	==> kube-proxy [4f3111ac490b85ec149b886f9f8d41a69c3c1df994ac5ded340805b14ef9d364] <==
	I0214 03:01:21.026971       1 server_others.go:69] "Using iptables proxy"
	I0214 03:01:21.053831       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0214 03:01:21.077490       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 03:01:21.079326       1 server_others.go:152] "Using iptables Proxier"
	I0214 03:01:21.079421       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 03:01:21.079852       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 03:01:21.079976       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 03:01:21.080412       1 server.go:846] "Version info" version="v1.28.4"
	I0214 03:01:21.080816       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 03:01:21.082424       1 config.go:188] "Starting service config controller"
	I0214 03:01:21.082816       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 03:01:21.082990       1 config.go:97] "Starting endpoint slice config controller"
	I0214 03:01:21.083080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 03:01:21.083968       1 config.go:315] "Starting node config controller"
	I0214 03:01:21.084079       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 03:01:21.183836       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0214 03:01:21.183942       1 shared_informer.go:318] Caches are synced for service config
	I0214 03:01:21.184210       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [6a61b69672281a143756e74f626edc7cd2d1fda0d86f58cf826e5cff82bb3e3b] <==
	I0214 03:02:07.903207       1 server_others.go:69] "Using iptables proxy"
	I0214 03:02:07.945190       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0214 03:02:08.036871       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 03:02:08.038956       1 server_others.go:152] "Using iptables Proxier"
	I0214 03:02:08.038999       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 03:02:08.039009       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 03:02:08.039069       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 03:02:08.039339       1 server.go:846] "Version info" version="v1.28.4"
	I0214 03:02:08.039359       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 03:02:08.040697       1 config.go:188] "Starting service config controller"
	I0214 03:02:08.040732       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 03:02:08.040812       1 config.go:97] "Starting endpoint slice config controller"
	I0214 03:02:08.040822       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 03:02:08.041345       1 config.go:315] "Starting node config controller"
	I0214 03:02:08.041361       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 03:02:08.141378       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0214 03:02:08.141383       1 shared_informer.go:318] Caches are synced for service config
	I0214 03:02:08.141457       1 shared_informer.go:318] Caches are synced for node config
	W0214 03:02:08.423932       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0214 03:02:08.423995       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0214 03:02:08.424019       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	
	
	==> kube-scheduler [a565e51d088ac2fc1885e415520e305cb46dd5eea2627415c26e3f77c806abc6] <==
	E0214 03:01:03.400143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0214 03:01:03.400343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0214 03:01:03.400551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0214 03:01:03.400756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 03:01:03.400954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 03:01:03.400854       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0214 03:01:03.401213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0214 03:01:03.401377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 03:01:03.401504       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0214 03:01:04.482818       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0214 03:02:12.848342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:50398->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.849626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:50424->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.849907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:50446->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.850143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:50462->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.850445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:50440->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.850659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:50434->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.854790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50454->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.855602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:50384->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.855765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:50364->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.855920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50422->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.856070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50370->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.856230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:50346->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.856381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:50350->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.856551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:50414->192.168.49.2:8441: read: connection reset by peer
	E0214 03:02:12.881259       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50248->192.168.49.2:8441: read: connection reset by peer
	
	
	==> kubelet <==
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.540367    3564 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-991896" podUID="ae520740-8862-4bff-9b06-2457c835adfc"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.540859    3564 status_manager.go:853] "Failed to get status for pod" podUID="309a145a-a578-407d-93ac-e7b34f958c71" pod="kube-system/kube-proxy-kd7sf" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kd7sf\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: E0214 03:02:08.541039    3564 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-991896\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-991896"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.541166    3564 scope.go:117] "RemoveContainer" containerID="2e0ef0e2fb337ae9b049785c43a6b6c91df3123b8022702bc044cc700a168e34"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.541935    3564 status_manager.go:853] "Failed to get status for pod" podUID="79cf7d44-3393-4acc-9a89-8c2696428c1f" pod="kube-system/coredns-5dd5756b68-jvd5k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jvd5k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.542162    3564 status_manager.go:853] "Failed to get status for pod" podUID="379ddb1d57c8632e0c8c7b8af30cbaf4" pod="kube-system/kube-apiserver-functional-991896" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-991896\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.542319    3564 status_manager.go:853] "Failed to get status for pod" podUID="3c8003a7-b2ec-4b9f-976e-b4eb23488340" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.542618    3564 status_manager.go:853] "Failed to get status for pod" podUID="d461098d-546c-422d-900a-eaa6fe79164a" pod="kube-system/kindnet-mh6zx" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-mh6zx\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: E0214 03:02:08.546785    3564 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-991896.17b39b903cae033f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"500", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-991896", UID:"379ddb1d57c8632e0c8c7b8af30cbaf4", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Pulled", Message:"Container image \"registry.k8s.io/kube-apiserver:v1.28.4\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"functional-991896"}, FirstTimestamp:time.Date(2024, time.February, 14, 3, 2, 7, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 14, 3, 2, 8, 545625706, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-991896"}': 'Patch "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-991896.17b39b903cae033f": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.649456    3564 status_manager.go:853] "Failed to get status for pod" podUID="e8f785d6d77d9f3c8770b2490e72cd74" pod="kube-system/kube-controller-manager-functional-991896" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-991896\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.649867    3564 status_manager.go:853] "Failed to get status for pod" podUID="3c8003a7-b2ec-4b9f-976e-b4eb23488340" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.650118    3564 status_manager.go:853] "Failed to get status for pod" podUID="d461098d-546c-422d-900a-eaa6fe79164a" pod="kube-system/kindnet-mh6zx" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-mh6zx\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.650351    3564 status_manager.go:853] "Failed to get status for pod" podUID="309a145a-a578-407d-93ac-e7b34f958c71" pod="kube-system/kube-proxy-kd7sf" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-kd7sf\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.650952    3564 status_manager.go:853] "Failed to get status for pod" podUID="79cf7d44-3393-4acc-9a89-8c2696428c1f" pod="kube-system/coredns-5dd5756b68-jvd5k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jvd5k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:08 functional-991896 kubelet[3564]: I0214 03:02:08.651461    3564 status_manager.go:853] "Failed to get status for pod" podUID="379ddb1d57c8632e0c8c7b8af30cbaf4" pod="kube-system/kube-apiserver-functional-991896" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-991896\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 14 03:02:09 functional-991896 kubelet[3564]: I0214 03:02:09.542890    3564 scope.go:117] "RemoveContainer" containerID="28f41dc60fa1551402325108769c398674c984fdb2e87b4960c798b54c845b1c"
	Feb 14 03:02:09 functional-991896 kubelet[3564]: I0214 03:02:09.559076    3564 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-991896" podUID="ae520740-8862-4bff-9b06-2457c835adfc"
	Feb 14 03:02:10 functional-991896 kubelet[3564]: I0214 03:02:10.347533    3564 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1dbc4f3504298fd95e33ef4f99ee62f2" path="/var/lib/kubelet/pods/1dbc4f3504298fd95e33ef4f99ee62f2/volumes"
	Feb 14 03:02:12 functional-991896 kubelet[3564]: E0214 03:02:12.586333    3564 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50290->192.168.49.2:8441: read: connection reset by peer
	Feb 14 03:02:12 functional-991896 kubelet[3564]: E0214 03:02:12.600316    3564 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50276->192.168.49.2:8441: read: connection reset by peer
	Feb 14 03:02:12 functional-991896 kubelet[3564]: E0214 03:02:12.600477    3564 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:50324->192.168.49.2:8441: read: connection reset by peer
	Feb 14 03:02:13 functional-991896 kubelet[3564]: I0214 03:02:13.019426    3564 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-991896"
	Feb 14 03:02:13 functional-991896 kubelet[3564]: I0214 03:02:13.567780    3564 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-991896" podUID="ae520740-8862-4bff-9b06-2457c835adfc"
	Feb 14 03:02:15 functional-991896 kubelet[3564]: I0214 03:02:15.024831    3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-991896" podStartSLOduration=2.024726059 podCreationTimestamp="2024-02-14 03:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 03:02:13.584190434 +0000 UTC m=+7.439209740" watchObservedRunningTime="2024-02-14 03:02:15.024726059 +0000 UTC m=+8.879745349"
	Feb 14 03:02:16 functional-991896 kubelet[3564]: I0214 03:02:16.651676    3564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [d897622bddb5428b6275a2991703a026f757508beb3ee8ec6e5e3d1d7d187e61] <==
	I0214 03:02:07.953094       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 03:02:08.000510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 03:02:08.000562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e6001396bdabd76768875fe539f0fda9c9976792c2755715ad9f93b45d6d3786] <==
	I0214 03:01:51.062958       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 03:01:51.094670       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 03:01:51.094741       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 03:01:51.105892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 03:01:51.106764       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-991896_a6a8af35-3a6a-48cc-af8f-ff9f46abfab3!
	I0214 03:01:51.106284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82706228-3b48-4dab-b5b4-5bb35f7a8242", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-991896_a6a8af35-3a6a-48cc-af8f-ff9f46abfab3 became leader
	I0214 03:01:51.207567       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-991896_a6a8af35-3a6a-48cc-af8f-ff9f46abfab3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-991896 -n functional-991896
helpers_test.go:261: (dbg) Run:  kubectl --context functional-991896 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (2.61s)
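
Note: the kube-apiserver log above shows the first restart attempt exiting with "failed to listen on 0.0.0.0:8441: bind: address already in use", and "describe nodes" shows the node returning Ready only 1s before the probe ran, so this failure looks like a race against the apiserver restart rather than a persistently unhealthy component. A manual re-check along the lines below would show whether the components settle; the first two commands mirror the harness's own post-mortem steps, while the last is an illustrative extra that assumes ss is available in the node image:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-991896 -n functional-991896
	kubectl --context functional-991896 get po -A --field-selector=status.phase!=Running
	docker exec functional-991896 ss -ltnp | sed -n '/8441/p'   # which process, if any, still holds 8441 inside the node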

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image load --daemon gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 image load --daemon gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr: (4.551477959s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-991896" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.84s)
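
Note: the same "image is not there" assertion also fails in the ImageReloadDaemon and ImageTagAndLoadDaemon variants below, so one manual reproduction covers all three. The commands are taken from the test steps themselves; only the final grep is added for convenience:

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-991896
	out/minikube-linux-arm64 -p functional-991896 image load --daemon gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr
	out/minikube-linux-arm64 -p functional-991896 image ls | grep addon-resizer   # the test expects the tag to be listed here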

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image load --daemon gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 image load --daemon gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr: (3.317319764s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-991896" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.57s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.701274021s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-991896
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image load --daemon gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 image load --daemon gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr: (3.034624846s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-991896" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.00s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image save gcr.io/google-containers/addon-resizer:functional-991896 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
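
The assertion here is only that the tar exists after image save; a manual check looks like this (a sketch; tar -tf is an illustrative way to confirm the archive is readable if it does appear):

    out/minikube-linux-arm64 -p functional-991896 image save gcr.io/google-containers/addon-resizer:functional-991896 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
    ls -l /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
    tar -tf /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar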

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I0214 03:03:24.771099 1168322 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:03:24.772262 1168322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:03:24.772277 1168322 out.go:304] Setting ErrFile to fd 2...
	I0214 03:03:24.772283 1168322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:03:24.772571 1168322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:03:24.773282 1168322 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:03:24.773417 1168322 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:03:24.773919 1168322 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
	I0214 03:03:24.790634 1168322 ssh_runner.go:195] Run: systemctl --version
	I0214 03:03:24.790719 1168322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
	I0214 03:03:24.807159 1168322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
	I0214 03:03:24.899840 1168322 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0214 03:03:24.899905 1168322 cache_images.go:254] Failed to load cached images for profile functional-991896. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0214 03:03:24.899929 1168322 cache_images.go:262] succeeded pushing to: 
	I0214 03:03:24.899934 1168322 cache_images.go:263] failed pushing to: functional-991896

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
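
The stderr above makes the dependency explicit: image load stats the tar that ImageSaveToFile was supposed to produce, so this failure is downstream of that one. A guard along these lines (a sketch, not part of the test suite) separates the two causes:

    TARBALL=/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
    if [ -f "$TARBALL" ]; then
      out/minikube-linux-arm64 -p functional-991896 image load "$TARBALL" --alsologtostderr
    else
      echo "save step never produced $TARBALL; fix ImageSaveToFile first"
    fi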

TestIngressAddonLegacy/serial/ValidateIngressAddons (53.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-089373 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-089373 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.112827275s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-089373 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-089373 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f5c4e1e1-f581-44ad-ad70-2d05c4bfeca6] Pending
helpers_test.go:344: "nginx" [f5c4e1e1-f581-44ad-ad70-2d05c4bfeca6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f5c4e1e1-f581-44ad-ad70-2d05c4bfeca6] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.007308744s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-089373 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-089373 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-089373 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.029576011s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
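
The empty stderr plus the connection timeout say that nothing answered DNS on the node IP at all, rather than a wrong record coming back. Probing the ingress-dns responder directly narrows that down (a sketch; the dig form is an assumed equivalent, and 192.168.49.2 is the node IP reported by the ip command above):

    # query the minikube node IP directly, with a short timeout
    nslookup -timeout=5 hello-john.test 192.168.49.2
    # or the same query via dig
    dig +time=5 @192.168.49.2 hello-john.test
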
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons disable ingress-dns --alsologtostderr -v=1: (10.108036361s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons disable ingress --alsologtostderr -v=1: (7.56472672s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-089373
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-089373:

-- stdout --
	[
	    {
	        "Id": "27d06981ff46da39ab15cfa0d00f4082298c886da08cdd43b876b449ad37eb94",
	        "Created": "2024-02-14T03:04:02.962261804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1169464,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:04:03.273139751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/27d06981ff46da39ab15cfa0d00f4082298c886da08cdd43b876b449ad37eb94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27d06981ff46da39ab15cfa0d00f4082298c886da08cdd43b876b449ad37eb94/hostname",
	        "HostsPath": "/var/lib/docker/containers/27d06981ff46da39ab15cfa0d00f4082298c886da08cdd43b876b449ad37eb94/hosts",
	        "LogPath": "/var/lib/docker/containers/27d06981ff46da39ab15cfa0d00f4082298c886da08cdd43b876b449ad37eb94/27d06981ff46da39ab15cfa0d00f4082298c886da08cdd43b876b449ad37eb94-json.log",
	        "Name": "/ingress-addon-legacy-089373",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-089373:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-089373",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fa8231c2da3e7dfbb0faa34f9ff0a323c16c0818fc16c9655d7a9622485d059b-init/diff:/var/lib/docker/overlay2/2b57dacbb0185892ad2774651ca7e304a0e7ce49c55385fdb5828fd98438b35e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8231c2da3e7dfbb0faa34f9ff0a323c16c0818fc16c9655d7a9622485d059b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8231c2da3e7dfbb0faa34f9ff0a323c16c0818fc16c9655d7a9622485d059b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8231c2da3e7dfbb0faa34f9ff0a323c16c0818fc16c9655d7a9622485d059b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-089373",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-089373/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-089373",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-089373",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-089373",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "35c28075b0f002b51e892c5ad8f0319483ae06e93f4299a442127485cf38da3d",
	            "SandboxKey": "/var/run/docker/netns/35c28075b0f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34048"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34049"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-089373": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "27d06981ff46",
	                        "ingress-addon-legacy-089373"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "71f4c7c28be56b119e1cf90bab3c52ef1d5509933b2ebc47d528edaf5253c3df",
	                    "EndpointID": "38896db18fd256bae05f068c8c301dac1b8954bd575e7610169175fde0447693",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-089373",
	                        "27d06981ff46"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
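
Rather than scanning the full JSON, single fields can be pulled out of docker inspect with Go templates, as the suite itself does for the SSH port; the second template here is an assumed variant for the static IP (a sketch):

    # host port mapped to the container's SSH endpoint (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-089373
    # static IP on the per-profile docker network
    docker container inspect -f '{{(index .NetworkSettings.Networks "ingress-addon-legacy-089373").IPAddress}}' ingress-addon-legacy-089373
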
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-089373 -n ingress-addon-legacy-089373
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-089373 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-089373 logs -n 25: (1.277135542s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-991896 image ls                                                   | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	| image   | functional-991896 image load --daemon                                        | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-991896                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896 image ls                                                   | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	| image   | functional-991896 image load --daemon                                        | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-991896                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896 image ls                                                   | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	| image   | functional-991896 image save                                                 | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-991896                     |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896 image rm                                                   | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-991896                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896 image ls                                                   | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	| image   | functional-991896 image load                                                 | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896 image save --daemon                                        | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-991896                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896                                                            | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896                                                            | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | image ls --format json                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896                                                            | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | image ls --format table                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-991896 ssh pgrep                                                  | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-991896                                                            | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-991896 image build -t                                             | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	|         | localhost/my-image:functional-991896                                         |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image   | functional-991896 image ls                                                   | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	| delete  | -p functional-991896                                                         | functional-991896           | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:03 UTC |
	| start   | -p ingress-addon-legacy-089373                                               | ingress-addon-legacy-089373 | jenkins | v1.32.0 | 14 Feb 24 03:03 UTC | 14 Feb 24 03:05 UTC |
	|         | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=containerd                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-089373                                                  | ingress-addon-legacy-089373 | jenkins | v1.32.0 | 14 Feb 24 03:05 UTC | 14 Feb 24 03:05 UTC |
	|         | addons enable ingress                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-089373                                                  | ingress-addon-legacy-089373 | jenkins | v1.32.0 | 14 Feb 24 03:05 UTC | 14 Feb 24 03:05 UTC |
	|         | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-089373                                                  | ingress-addon-legacy-089373 | jenkins | v1.32.0 | 14 Feb 24 03:05 UTC | 14 Feb 24 03:05 UTC |
	|         | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-089373 ip                                               | ingress-addon-legacy-089373 | jenkins | v1.32.0 | 14 Feb 24 03:05 UTC | 14 Feb 24 03:05 UTC |
	| addons  | ingress-addon-legacy-089373                                                  | ingress-addon-legacy-089373 | jenkins | v1.32.0 | 14 Feb 24 03:06 UTC | 14 Feb 24 03:06 UTC |
	|         | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-089373                                                  | ingress-addon-legacy-089373 | jenkins | v1.32.0 | 14 Feb 24 03:06 UTC | 14 Feb 24 03:06 UTC |
	|         | addons disable ingress                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 03:03:31
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 03:03:31.390595 1169013 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:03:31.390759 1169013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:03:31.390770 1169013 out.go:304] Setting ErrFile to fd 2...
	I0214 03:03:31.390777 1169013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:03:31.391024 1169013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:03:31.391440 1169013 out.go:298] Setting JSON to false
	I0214 03:03:31.392322 1169013 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20757,"bootTime":1707859054,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 03:03:31.392398 1169013 start.go:138] virtualization:  
	I0214 03:03:31.394828 1169013 out.go:177] * [ingress-addon-legacy-089373] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 03:03:31.397175 1169013 out.go:177]   - MINIKUBE_LOCATION=18166
	I0214 03:03:31.398943 1169013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:03:31.397258 1169013 notify.go:220] Checking for updates...
	I0214 03:03:31.403362 1169013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:03:31.405120 1169013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 03:03:31.407209 1169013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:03:31.409050 1169013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:03:31.411116 1169013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:03:31.432708 1169013 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:03:31.432821 1169013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:03:31.504562 1169013 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-14 03:03:31.494130736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:03:31.504670 1169013 docker.go:295] overlay module found
	I0214 03:03:31.507043 1169013 out.go:177] * Using the docker driver based on user configuration
	I0214 03:03:31.509475 1169013 start.go:298] selected driver: docker
	I0214 03:03:31.509506 1169013 start.go:902] validating driver "docker" against <nil>
	I0214 03:03:31.509520 1169013 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:03:31.510139 1169013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:03:31.574067 1169013 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-14 03:03:31.565350169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:03:31.574231 1169013 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 03:03:31.574464 1169013 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 03:03:31.576590 1169013 out.go:177] * Using Docker driver with root privileges
	I0214 03:03:31.578632 1169013 cni.go:84] Creating CNI manager for ""
	I0214 03:03:31.578659 1169013 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 03:03:31.578670 1169013 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 03:03:31.578688 1169013 start_flags.go:321] config:
	{Name:ingress-addon-legacy-089373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-089373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:03:31.580933 1169013 out.go:177] * Starting control plane node ingress-addon-legacy-089373 in cluster ingress-addon-legacy-089373
	I0214 03:03:31.582880 1169013 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0214 03:03:31.585058 1169013 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 03:03:31.586940 1169013 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0214 03:03:31.587023 1169013 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 03:03:31.602694 1169013 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 03:03:31.602719 1169013 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 03:03:31.660912 1169013 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0214 03:03:31.660944 1169013 cache.go:56] Caching tarball of preloaded images
	I0214 03:03:31.661116 1169013 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0214 03:03:31.663292 1169013 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0214 03:03:31.665054 1169013 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0214 03:03:31.816500 1169013 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0214 03:03:55.055522 1169013 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0214 03:03:55.055649 1169013 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0214 03:03:56.251117 1169013 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I0214 03:03:56.251519 1169013 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/config.json ...
	I0214 03:03:56.251552 1169013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/config.json: {Name:mka4c90720ecdd895bd9d9684a372bf563542d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:03:56.251757 1169013 cache.go:194] Successfully downloaded all kic artifacts
	I0214 03:03:56.251794 1169013 start.go:365] acquiring machines lock for ingress-addon-legacy-089373: {Name:mk89c7b278dd867c968d22f467b687fb603c0671 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 03:03:56.251851 1169013 start.go:369] acquired machines lock for "ingress-addon-legacy-089373" in 39.851µs
	I0214 03:03:56.251869 1169013 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-089373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-089373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0214 03:03:56.251941 1169013 start.go:125] createHost starting for "" (driver="docker")
	I0214 03:03:56.254483 1169013 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0214 03:03:56.254716 1169013 start.go:159] libmachine.API.Create for "ingress-addon-legacy-089373" (driver="docker")
	I0214 03:03:56.254743 1169013 client.go:168] LocalClient.Create starting
	I0214 03:03:56.254802 1169013 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem
	I0214 03:03:56.254843 1169013 main.go:141] libmachine: Decoding PEM data...
	I0214 03:03:56.254864 1169013 main.go:141] libmachine: Parsing certificate...
	I0214 03:03:56.254928 1169013 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem
	I0214 03:03:56.254953 1169013 main.go:141] libmachine: Decoding PEM data...
	I0214 03:03:56.254967 1169013 main.go:141] libmachine: Parsing certificate...
	I0214 03:03:56.255355 1169013 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-089373 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 03:03:56.272287 1169013 cli_runner.go:211] docker network inspect ingress-addon-legacy-089373 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 03:03:56.272385 1169013 network_create.go:281] running [docker network inspect ingress-addon-legacy-089373] to gather additional debugging logs...
	I0214 03:03:56.272408 1169013 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-089373
	W0214 03:03:56.287281 1169013 cli_runner.go:211] docker network inspect ingress-addon-legacy-089373 returned with exit code 1
	I0214 03:03:56.287316 1169013 network_create.go:284] error running [docker network inspect ingress-addon-legacy-089373]: docker network inspect ingress-addon-legacy-089373: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-089373 not found
	I0214 03:03:56.287330 1169013 network_create.go:286] output of [docker network inspect ingress-addon-legacy-089373]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-089373 not found
	
	** /stderr **
	I0214 03:03:56.287436 1169013 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 03:03:56.303172 1169013 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004eb3a0}
	I0214 03:03:56.303217 1169013 network_create.go:124] attempt to create docker network ingress-addon-legacy-089373 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0214 03:03:56.303277 1169013 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-089373 ingress-addon-legacy-089373
	I0214 03:03:56.362744 1169013 network_create.go:108] docker network ingress-addon-legacy-089373 192.168.49.0/24 created
	I0214 03:03:56.362780 1169013 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-089373" container
	I0214 03:03:56.362866 1169013 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 03:03:56.376738 1169013 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-089373 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-089373 --label created_by.minikube.sigs.k8s.io=true
	I0214 03:03:56.393243 1169013 oci.go:103] Successfully created a docker volume ingress-addon-legacy-089373
	I0214 03:03:56.393902 1169013 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-089373-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-089373 --entrypoint /usr/bin/test -v ingress-addon-legacy-089373:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0214 03:03:57.887352 1169013 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-089373-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-089373 --entrypoint /usr/bin/test -v ingress-addon-legacy-089373:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.49340394s)
	I0214 03:03:57.887387 1169013 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-089373
	I0214 03:03:57.887407 1169013 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0214 03:03:57.887427 1169013 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 03:03:57.887534 1169013 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-089373:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 03:04:02.890439 1169013 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-089373:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.002859664s)
	I0214 03:04:02.890477 1169013 kic.go:203] duration metric: took 5.003047 seconds to extract preloaded images to volume
	W0214 03:04:02.890641 1169013 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 03:04:02.890758 1169013 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 03:04:02.948064 1169013 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-089373 --name ingress-addon-legacy-089373 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-089373 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-089373 --network ingress-addon-legacy-089373 --ip 192.168.49.2 --volume ingress-addon-legacy-089373:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0214 03:04:03.285310 1169013 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-089373 --format={{.State.Running}}
	I0214 03:04:03.309347 1169013 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-089373 --format={{.State.Status}}
	I0214 03:04:03.330648 1169013 cli_runner.go:164] Run: docker exec ingress-addon-legacy-089373 stat /var/lib/dpkg/alternatives/iptables
	I0214 03:04:03.388315 1169013 oci.go:144] the created container "ingress-addon-legacy-089373" has a running status.
	I0214 03:04:03.388342 1169013 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa...
	I0214 03:04:04.355967 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0214 03:04:04.356016 1169013 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 03:04:04.380168 1169013 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-089373 --format={{.State.Status}}
	I0214 03:04:04.404386 1169013 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 03:04:04.404410 1169013 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-089373 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 03:04:04.457109 1169013 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-089373 --format={{.State.Status}}
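
The key provisioning above boils down to generating a host-side keypair and installing the public half for the in-container docker user; a minimal sketch (the ./id_rsa path is hypothetical, and /home/docker/.ssh is assumed to exist in the kicbase image):

	ssh-keygen -t rsa -N '' -f ./id_rsa
	docker cp ./id_rsa.pub ingress-addon-legacy-089373:/home/docker/.ssh/authorized_keys
	# Same ownership fix the log performs via kic_runner:
	docker exec --privileged ingress-addon-legacy-089373 chown docker:docker /home/docker/.ssh/authorized_keys
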
	I0214 03:04:04.485525 1169013 machine.go:88] provisioning docker machine ...
	I0214 03:04:04.485557 1169013 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-089373"
	I0214 03:04:04.485629 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:04.503737 1169013 main.go:141] libmachine: Using SSH client type: native
	I0214 03:04:04.504251 1169013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34052 <nil> <nil>}
	I0214 03:04:04.504276 1169013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-089373 && echo "ingress-addon-legacy-089373" | sudo tee /etc/hostname
	I0214 03:04:04.649392 1169013 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-089373
	
	I0214 03:04:04.649500 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:04.666260 1169013 main.go:141] libmachine: Using SSH client type: native
	I0214 03:04:04.666669 1169013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34052 <nil> <nil>}
	I0214 03:04:04.666694 1169013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-089373' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-089373/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-089373' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 03:04:04.795717 1169013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
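
A quick way to confirm both SSH commands took effect (container name from the log):

	docker exec ingress-addon-legacy-089373 hostname                  # ingress-addon-legacy-089373
	docker exec ingress-addon-legacy-089373 grep 127.0.1.1 /etc/hosts # 127.0.1.1 ingress-addon-legacy-089373
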
	I0214 03:04:04.795746 1169013 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18166-1129740/.minikube CaCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18166-1129740/.minikube}
	I0214 03:04:04.795774 1169013 ubuntu.go:177] setting up certificates
	I0214 03:04:04.795784 1169013 provision.go:83] configureAuth start
	I0214 03:04:04.795859 1169013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-089373
	I0214 03:04:04.812031 1169013 provision.go:138] copyHostCerts
	I0214 03:04:04.812075 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem
	I0214 03:04:04.812108 1169013 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem, removing ...
	I0214 03:04:04.812120 1169013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem
	I0214 03:04:04.812198 1169013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/key.pem (1675 bytes)
	I0214 03:04:04.812295 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem
	I0214 03:04:04.812319 1169013 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem, removing ...
	I0214 03:04:04.812327 1169013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem
	I0214 03:04:04.812354 1169013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.pem (1082 bytes)
	I0214 03:04:04.812400 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem
	I0214 03:04:04.812419 1169013 exec_runner.go:144] found /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem, removing ...
	I0214 03:04:04.812424 1169013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem
	I0214 03:04:04.812460 1169013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18166-1129740/.minikube/cert.pem (1123 bytes)
	I0214 03:04:04.812509 1169013 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-089373 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-089373]
	I0214 03:04:05.969823 1169013 provision.go:172] copyRemoteCerts
	I0214 03:04:05.969892 1169013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 03:04:05.969934 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:05.985345 1169013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa Username:docker}
	I0214 03:04:06.088751 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0214 03:04:06.088816 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 03:04:06.113269 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0214 03:04:06.113332 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0214 03:04:06.135965 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0214 03:04:06.136032 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 03:04:06.159421 1169013 provision.go:86] duration metric: configureAuth took 1.363616049s
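
The server certificate staged under /etc/docker can be sanity-checked from the host; a sketch, assuming an OpenSSL 1.1.1+ binary inside the container:

	# The SANs should match the san=[...] list requested by the provision step above.
	docker exec ingress-addon-legacy-089373 \
	  openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
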
	I0214 03:04:06.159448 1169013 ubuntu.go:193] setting minikube options for container-runtime
	I0214 03:04:06.159668 1169013 config.go:182] Loaded profile config "ingress-addon-legacy-089373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0214 03:04:06.159682 1169013 machine.go:91] provisioned docker machine in 1.674135716s
	I0214 03:04:06.159689 1169013 client.go:171] LocalClient.Create took 9.904939872s
	I0214 03:04:06.159703 1169013 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-089373" took 9.904986994s
	I0214 03:04:06.159715 1169013 start.go:300] post-start starting for "ingress-addon-legacy-089373" (driver="docker")
	I0214 03:04:06.159728 1169013 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 03:04:06.159782 1169013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 03:04:06.159826 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:06.175777 1169013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa Username:docker}
	I0214 03:04:06.271539 1169013 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 03:04:06.274934 1169013 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 03:04:06.274975 1169013 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 03:04:06.274991 1169013 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 03:04:06.275004 1169013 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 03:04:06.275015 1169013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/addons for local assets ...
	I0214 03:04:06.275074 1169013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18166-1129740/.minikube/files for local assets ...
	I0214 03:04:06.275170 1169013 filesync.go:149] local asset: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem -> 11350872.pem in /etc/ssl/certs
	I0214 03:04:06.275183 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem -> /etc/ssl/certs/11350872.pem
	I0214 03:04:06.275301 1169013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 03:04:06.284705 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem --> /etc/ssl/certs/11350872.pem (1708 bytes)
	I0214 03:04:06.308904 1169013 start.go:303] post-start completed in 149.173761ms
	I0214 03:04:06.309348 1169013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-089373
	I0214 03:04:06.325211 1169013 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/config.json ...
	I0214 03:04:06.325530 1169013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:04:06.325585 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:06.341572 1169013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa Username:docker}
	I0214 03:04:06.441371 1169013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 03:04:06.446549 1169013 start.go:128] duration metric: createHost completed in 10.194590387s
	I0214 03:04:06.446574 1169013 start.go:83] releasing machines lock for "ingress-addon-legacy-089373", held for 10.19471424s
	I0214 03:04:06.446648 1169013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-089373
	I0214 03:04:06.467141 1169013 ssh_runner.go:195] Run: cat /version.json
	I0214 03:04:06.467194 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:06.467199 1169013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 03:04:06.467269 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:06.485727 1169013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa Username:docker}
	I0214 03:04:06.508705 1169013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa Username:docker}
	I0214 03:04:06.575174 1169013 ssh_runner.go:195] Run: systemctl --version
	I0214 03:04:06.709770 1169013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 03:04:06.714543 1169013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0214 03:04:06.744220 1169013 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0214 03:04:06.744359 1169013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 03:04:06.774927 1169013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
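
After the two find/sed passes, the surviving loopback config looks roughly like this (a sketch; the exact filename under /etc/cni/net.d varies):

	$ cat /etc/cni/net.d/*loopback.conf*
	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}
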
	I0214 03:04:06.774960 1169013 start.go:475] detecting cgroup driver to use...
	I0214 03:04:06.775013 1169013 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 03:04:06.775112 1169013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0214 03:04:06.787578 1169013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0214 03:04:06.798647 1169013 docker.go:217] disabling cri-docker service (if available) ...
	I0214 03:04:06.798762 1169013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 03:04:06.812135 1169013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 03:04:06.825771 1169013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 03:04:06.916356 1169013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 03:04:07.009596 1169013 docker.go:233] disabling docker service ...
	I0214 03:04:07.009668 1169013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 03:04:07.030618 1169013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 03:04:07.042651 1169013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 03:04:07.130794 1169013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 03:04:07.219065 1169013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 03:04:07.230607 1169013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 03:04:07.247896 1169013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0214 03:04:07.258259 1169013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0214 03:04:07.268472 1169013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0214 03:04:07.268579 1169013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0214 03:04:07.279992 1169013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:04:07.289662 1169013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0214 03:04:07.299260 1169013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:04:07.308949 1169013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 03:04:07.317420 1169013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0214 03:04:07.326671 1169013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 03:04:07.335158 1169013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 03:04:07.343305 1169013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:04:07.435739 1169013 ssh_runner.go:195] Run: sudo systemctl restart containerd
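
Condensed, the containerd reconfiguration above is a handful of idempotent sed edits plus a restart; a sketch reusing the expressions from the log:

	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart containerd
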
	I0214 03:04:07.563519 1169013 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0214 03:04:07.563600 1169013 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0214 03:04:07.567298 1169013 start.go:543] Will wait 60s for crictl version
	I0214 03:04:07.567380 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:07.570887 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 03:04:07.607116 1169013 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0214 03:04:07.607201 1169013 ssh_runner.go:195] Run: containerd --version
	I0214 03:04:07.643597 1169013 ssh_runner.go:195] Run: containerd --version
	I0214 03:04:07.671734 1169013 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.26 ...
	I0214 03:04:07.673559 1169013 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-089373 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 03:04:07.688528 1169013 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 03:04:07.692006 1169013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 03:04:07.702554 1169013 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0214 03:04:07.702635 1169013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 03:04:07.737759 1169013 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0214 03:04:07.737838 1169013 ssh_runner.go:195] Run: which lz4
	I0214 03:04:07.741135 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0214 03:04:07.741284 1169013 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 03:04:07.744542 1169013 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 03:04:07.744625 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0214 03:04:09.913970 1169013 containerd.go:548] Took 2.172743 seconds to copy over tarball
	I0214 03:04:09.914068 1169013 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 03:04:12.739459 1169013 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.825357594s)
	I0214 03:04:12.739501 1169013 containerd.go:555] Took 2.825502 seconds to extract the tarball
	I0214 03:04:12.739513 1169013 ssh_runner.go:146] rm: /preloaded.tar.lz4
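
The same import can be replayed manually on a node that already holds the tarball at /preloaded.tar.lz4:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo systemctl restart containerd
	sudo crictl images --output json | head   # verify the image store was repopulated
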
	I0214 03:04:12.854408 1169013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:04:12.945070 1169013 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0214 03:04:13.080661 1169013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 03:04:13.121109 1169013 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0214 03:04:13.121133 1169013 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0214 03:04:13.121171 1169013 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:04:13.121393 1169013 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0214 03:04:13.121510 1169013 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:04:13.121601 1169013 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0214 03:04:13.121685 1169013 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 03:04:13.121769 1169013 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0214 03:04:13.121837 1169013 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0214 03:04:13.121913 1169013 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0214 03:04:13.123043 1169013 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0214 03:04:13.123451 1169013 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:04:13.123669 1169013 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0214 03:04:13.123820 1169013 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 03:04:13.123958 1169013 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0214 03:04:13.124083 1169013 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0214 03:04:13.124212 1169013 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:04:13.124447 1169013 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W0214 03:04:13.482645 1169013 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	W0214 03:04:13.482687 1169013 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0214 03:04:13.482869 1169013 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257"
	I0214 03:04:13.482952 1169013 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0214 03:04:13.482979 1169013 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c"
	I0214 03:04:13.483027 1169013 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0214 03:04:13.483711 1169013 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0214 03:04:13.483843 1169013 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03"
	I0214 03:04:13.483893 1169013 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0214 03:04:13.486715 1169013 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"
	I0214 03:04:13.486778 1169013 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0214 03:04:13.487322 1169013 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 03:04:13.487424 1169013 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79"
	I0214 03:04:13.487466 1169013 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0214 03:04:13.511174 1169013 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 03:04:13.511341 1169013 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7"
	I0214 03:04:13.511430 1169013 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0214 03:04:13.518634 1169013 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 03:04:13.518885 1169013 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18"
	I0214 03:04:13.518973 1169013 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0214 03:04:13.623560 1169013 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0214 03:04:13.623721 1169013 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I0214 03:04:13.623801 1169013 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
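
Each "Checking existence" line pairs an image name with the sha minikube expects; the equivalent manual check is a one-liner:

	# List what the k8s.io namespace actually holds and compare digests by eye.
	sudo ctr -n k8s.io images ls | grep -E 'kube-apiserver|coredns|etcd|pause|storage-provisioner'
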
	I0214 03:04:14.154603 1169013 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0214 03:04:14.154779 1169013 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0214 03:04:14.154860 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:14.154714 1169013 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0214 03:04:14.154969 1169013 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0214 03:04:14.155012 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:14.256168 1169013 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0214 03:04:14.256267 1169013 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0214 03:04:14.256351 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:14.279831 1169013 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0214 03:04:14.279914 1169013 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0214 03:04:14.280000 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:14.281055 1169013 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0214 03:04:14.281120 1169013 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 03:04:14.281188 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:14.281355 1169013 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0214 03:04:14.281396 1169013 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:04:14.281465 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:14.283948 1169013 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0214 03:04:14.284016 1169013 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0214 03:04:14.284092 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:14.328606 1169013 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0214 03:04:14.328654 1169013 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:04:14.328773 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0214 03:04:14.328851 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0214 03:04:14.328911 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 03:04:14.328991 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:04:14.329053 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0214 03:04:14.329110 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0214 03:04:14.329170 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0214 03:04:14.329227 1169013 ssh_runner.go:195] Run: which crictl
	I0214 03:04:14.493040 1169013 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:04:14.493131 1169013 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0214 03:04:14.493212 1169013 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0214 03:04:14.493230 1169013 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0214 03:04:14.493288 1169013 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0214 03:04:14.493326 1169013 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0214 03:04:14.493334 1169013 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0214 03:04:14.493382 1169013 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0214 03:04:14.542994 1169013 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0214 03:04:14.543061 1169013 cache_images.go:92] LoadImages completed in 1.421914503s
	W0214 03:04:14.543123 1169013 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7: no such file or directory
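
When the cache miss above occurs, minikube falls back to pulling; the same thing can be done by hand for any image in the LoadImages list, e.g. (pause:3.2 shown as an example):

	sudo ctr -n k8s.io images pull registry.k8s.io/pause:3.2
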
	I0214 03:04:14.543173 1169013 ssh_runner.go:195] Run: sudo crictl info
	I0214 03:04:14.582954 1169013 cni.go:84] Creating CNI manager for ""
	I0214 03:04:14.582980 1169013 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 03:04:14.582999 1169013 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 03:04:14.583017 1169013 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-089373 NodeName:ingress-addon-legacy-089373 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0214 03:04:14.583147 1169013 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-089373"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
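Before kubeadm consumes this multi-document file, it can be sanity-checked as YAML; a sketch, assuming PyYAML is available on the node:

	# Expect 4 documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
	sudo cat /var/tmp/minikube/kubeadm.yaml.new \
	  | python3 -c 'import sys, yaml; print(len(list(yaml.safe_load_all(sys.stdin))), "documents parsed")'
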
	I0214 03:04:14.583222 1169013 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-089373 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-089373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0214 03:04:14.583289 1169013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0214 03:04:14.592131 1169013 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 03:04:14.592211 1169013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 03:04:14.600807 1169013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0214 03:04:14.619379 1169013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0214 03:04:14.638164 1169013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
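
The three scp lines above only stage the kubelet drop-in, unit file, and kubeadm config; nothing is started yet. Manually that amounts to (a sketch; kubeadm itself starts the kubelet later):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	# ...write 10-kubeadm.conf, kubelet.service, and kubeadm.yaml.new as rendered above...
	sudo systemctl daemon-reload
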
	I0214 03:04:14.656436 1169013 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 03:04:14.659903 1169013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 03:04:14.671228 1169013 certs.go:56] Setting up /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373 for IP: 192.168.49.2
	I0214 03:04:14.671261 1169013 certs.go:190] acquiring lock for shared ca certs: {Name:mk121f32762802a204d98d3cbcae9456442a0756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:14.671402 1169013 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key
	I0214 03:04:14.671454 1169013 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key
	I0214 03:04:14.671574 1169013 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.key
	I0214 03:04:14.671585 1169013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt with IP's: []
	I0214 03:04:14.919351 1169013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt ...
	I0214 03:04:14.919384 1169013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: {Name:mk436b4827c1f39c0b8ef85da5cb8f6d4720105d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:14.919657 1169013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.key ...
	I0214 03:04:14.919677 1169013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.key: {Name:mkbb2bf4c4d02b36ae86ddfc295f46a195840e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:14.919772 1169013 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.key.dd3b5fb2
	I0214 03:04:14.919797 1169013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0214 03:04:15.494757 1169013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.crt.dd3b5fb2 ...
	I0214 03:04:15.494795 1169013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.crt.dd3b5fb2: {Name:mkb5e36922b8670d3570231675bda0fa30e245aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:15.494983 1169013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.key.dd3b5fb2 ...
	I0214 03:04:15.494997 1169013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.key.dd3b5fb2: {Name:mk0d55a6659a44e1fbee7cdc25718412785df40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:15.495105 1169013 certs.go:337] copying /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.crt
	I0214 03:04:15.495184 1169013 certs.go:341] copying /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.key
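
The freshly assembled apiserver certificate should carry exactly the IP SANs requested above; verifiable with OpenSSL 1.1.1+ (path shortened to ~/.minikube for readability):

	openssl x509 -noout -ext subjectAltName \
	  -in ~/.minikube/profiles/ingress-addon-legacy-089373/apiserver.crt
	# expect: IP Address:192.168.49.2, IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1 (plus DNS SANs)
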
	I0214 03:04:15.495255 1169013 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.key
	I0214 03:04:15.495271 1169013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.crt with IP's: []
	I0214 03:04:15.950185 1169013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.crt ...
	I0214 03:04:15.950220 1169013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.crt: {Name:mk63ef473a1c9c9079e2629a5a1c569243b1b451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:15.950406 1169013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.key ...
	I0214 03:04:15.950422 1169013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.key: {Name:mkc50e08bd52838cfd2067d174fbfb9bea03498b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:15.950508 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0214 03:04:15.950531 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0214 03:04:15.950548 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0214 03:04:15.950565 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0214 03:04:15.950576 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0214 03:04:15.950592 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0214 03:04:15.950609 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0214 03:04:15.950622 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0214 03:04:15.950680 1169013 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087.pem (1338 bytes)
	W0214 03:04:15.950721 1169013 certs.go:433] ignoring /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087_empty.pem, impossibly tiny 0 bytes
	I0214 03:04:15.950739 1169013 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 03:04:15.950769 1169013 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/ca.pem (1082 bytes)
	I0214 03:04:15.950807 1169013 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/cert.pem (1123 bytes)
	I0214 03:04:15.950839 1169013 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/certs/key.pem (1675 bytes)
	I0214 03:04:15.950891 1169013 certs.go:437] found cert: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem (1708 bytes)
	I0214 03:04:15.950923 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:04:15.950941 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087.pem -> /usr/share/ca-certificates/1135087.pem
	I0214 03:04:15.950959 1169013 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem -> /usr/share/ca-certificates/11350872.pem
	I0214 03:04:15.951585 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 03:04:15.975055 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 03:04:15.998853 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 03:04:16.024875 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 03:04:16.051047 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 03:04:16.076788 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 03:04:16.101837 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 03:04:16.125620 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 03:04:16.150183 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 03:04:16.176796 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/certs/1135087.pem --> /usr/share/ca-certificates/1135087.pem (1338 bytes)
	I0214 03:04:16.200896 1169013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/ssl/certs/11350872.pem --> /usr/share/ca-certificates/11350872.pem (1708 bytes)
	I0214 03:04:16.224865 1169013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 03:04:16.242605 1169013 ssh_runner.go:195] Run: openssl version
	I0214 03:04:16.248242 1169013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11350872.pem && ln -fs /usr/share/ca-certificates/11350872.pem /etc/ssl/certs/11350872.pem"
	I0214 03:04:16.257574 1169013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11350872.pem
	I0214 03:04:16.260881 1169013 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 03:00 /usr/share/ca-certificates/11350872.pem
	I0214 03:04:16.260966 1169013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11350872.pem
	I0214 03:04:16.267829 1169013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11350872.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 03:04:16.276802 1169013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 03:04:16.285931 1169013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:04:16.289489 1169013 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:55 /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:04:16.289561 1169013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:04:16.296481 1169013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 03:04:16.306122 1169013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1135087.pem && ln -fs /usr/share/ca-certificates/1135087.pem /etc/ssl/certs/1135087.pem"
	I0214 03:04:16.315597 1169013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1135087.pem
	I0214 03:04:16.318961 1169013 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 03:00 /usr/share/ca-certificates/1135087.pem
	I0214 03:04:16.319072 1169013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1135087.pem
	I0214 03:04:16.325869 1169013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1135087.pem /etc/ssl/certs/51391683.0"
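
The /etc/ssl/certs/<hash>.0 names above follow OpenSSL's subject-hash convention, and the hash is exactly what the openssl x509 -hash calls in the log print:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
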
	I0214 03:04:16.335207 1169013 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 03:04:16.338578 1169013 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0214 03:04:16.338675 1169013 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-089373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-089373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:04:16.338754 1169013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0214 03:04:16.338811 1169013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 03:04:16.374306 1169013 cri.go:89] found id: ""
	I0214 03:04:16.374403 1169013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 03:04:16.383320 1169013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 03:04:16.392928 1169013 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0214 03:04:16.395160 1169013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 03:04:16.405377 1169013 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 03:04:16.405421 1169013 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 03:04:16.458495 1169013 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0214 03:04:16.458577 1169013 kubeadm.go:322] [preflight] Running pre-flight checks
	I0214 03:04:16.510223 1169013 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0214 03:04:16.510354 1169013 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0214 03:04:16.510416 1169013 kubeadm.go:322] OS: Linux
	I0214 03:04:16.510476 1169013 kubeadm.go:322] CGROUPS_CPU: enabled
	I0214 03:04:16.510548 1169013 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0214 03:04:16.510620 1169013 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0214 03:04:16.510708 1169013 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0214 03:04:16.510811 1169013 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0214 03:04:16.510893 1169013 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0214 03:04:16.596157 1169013 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 03:04:16.596302 1169013 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 03:04:16.596425 1169013 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 03:04:16.820202 1169013 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 03:04:16.821719 1169013 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 03:04:16.821938 1169013 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0214 03:04:16.924736 1169013 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 03:04:16.927553 1169013 out.go:204]   - Generating certificates and keys ...
	I0214 03:04:16.927751 1169013 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0214 03:04:16.927837 1169013 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0214 03:04:17.189578 1169013 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 03:04:17.717645 1169013 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0214 03:04:18.231253 1169013 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0214 03:04:19.652623 1169013 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0214 03:04:20.086788 1169013 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0214 03:04:20.087162 1169013 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-089373 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 03:04:20.756349 1169013 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0214 03:04:20.759179 1169013 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-089373 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 03:04:21.509024 1169013 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 03:04:22.022423 1169013 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 03:04:22.270225 1169013 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0214 03:04:22.270884 1169013 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 03:04:22.444147 1169013 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 03:04:23.052349 1169013 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 03:04:23.650252 1169013 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 03:04:24.137811 1169013 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 03:04:24.138573 1169013 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 03:04:24.141491 1169013 out.go:204]   - Booting up control plane ...
	I0214 03:04:24.141597 1169013 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 03:04:24.159865 1169013 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 03:04:24.159944 1169013 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 03:04:24.160020 1169013 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 03:04:24.160813 1169013 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 03:04:35.663338 1169013 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502386 seconds
	I0214 03:04:35.663503 1169013 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 03:04:35.678574 1169013 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 03:04:36.200702 1169013 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 03:04:36.200853 1169013 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-089373 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0214 03:04:36.712024 1169013 kubeadm.go:322] [bootstrap-token] Using token: sz4i81.drs0v9b7hkckzdp3
	I0214 03:04:36.714797 1169013 out.go:204]   - Configuring RBAC rules ...
	I0214 03:04:36.714920 1169013 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 03:04:36.732572 1169013 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 03:04:36.754404 1169013 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 03:04:36.759275 1169013 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 03:04:36.769265 1169013 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 03:04:36.774597 1169013 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 03:04:36.784221 1169013 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 03:04:37.074959 1169013 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0214 03:04:37.153956 1169013 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0214 03:04:37.155724 1169013 kubeadm.go:322] 
	I0214 03:04:37.155802 1169013 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0214 03:04:37.155809 1169013 kubeadm.go:322] 
	I0214 03:04:37.155881 1169013 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0214 03:04:37.155887 1169013 kubeadm.go:322] 
	I0214 03:04:37.155911 1169013 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0214 03:04:37.155967 1169013 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 03:04:37.156014 1169013 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 03:04:37.156018 1169013 kubeadm.go:322] 
	I0214 03:04:37.156067 1169013 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0214 03:04:37.156136 1169013 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 03:04:37.156210 1169013 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 03:04:37.156226 1169013 kubeadm.go:322] 
	I0214 03:04:37.156304 1169013 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 03:04:37.156376 1169013 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0214 03:04:37.156381 1169013 kubeadm.go:322] 
	I0214 03:04:37.156460 1169013 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sz4i81.drs0v9b7hkckzdp3 \
	I0214 03:04:37.156559 1169013 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d3f320a98a2f1022ee1a4d9bbdd9d3ce0ce634a8fab1d54ded076f0a14b0e04e \
	I0214 03:04:37.156581 1169013 kubeadm.go:322]     --control-plane 
	I0214 03:04:37.156586 1169013 kubeadm.go:322] 
	I0214 03:04:37.156664 1169013 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0214 03:04:37.156669 1169013 kubeadm.go:322] 
	I0214 03:04:37.156745 1169013 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sz4i81.drs0v9b7hkckzdp3 \
	I0214 03:04:37.156842 1169013 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d3f320a98a2f1022ee1a4d9bbdd9d3ce0ce634a8fab1d54ded076f0a14b0e04e 
	I0214 03:04:37.160572 1169013 kubeadm.go:322] W0214 03:04:16.457902    1090 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0214 03:04:37.160904 1169013 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0214 03:04:37.161016 1169013 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 03:04:37.161159 1169013 kubeadm.go:322] W0214 03:04:24.155668    1090 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0214 03:04:37.161355 1169013 kubeadm.go:322] W0214 03:04:24.157289    1090 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0214 03:04:37.161408 1169013 cni.go:84] Creating CNI manager for ""
	I0214 03:04:37.161422 1169013 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 03:04:37.164099 1169013 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 03:04:37.166246 1169013 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 03:04:37.172439 1169013 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0214 03:04:37.172466 1169013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0214 03:04:37.192069 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 03:04:37.698596 1169013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 03:04:37.698687 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:37.698729 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=40f210e92693e4612e04be0697de06db21ac5cf0 minikube.k8s.io/name=ingress-addon-legacy-089373 minikube.k8s.io/updated_at=2024_02_14T03_04_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:37.866588 1169013 ops.go:34] apiserver oom_adj: -16
	I0214 03:04:37.866679 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:38.366816 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:38.867014 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:39.366887 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:39.867688 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:40.366891 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:40.866825 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:41.367043 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:41.867758 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:42.367585 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:42.867817 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:43.367465 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:43.867321 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:44.366848 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:44.866967 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:45.367548 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:45.867179 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:46.367623 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:46.867590 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:47.367355 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:47.866903 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:48.367090 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:48.867753 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:49.367797 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:49.866868 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:50.367511 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:50.867128 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:51.367632 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:51.867209 1169013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:04:52.158604 1169013 kubeadm.go:1088] duration metric: took 14.459985304s to wait for elevateKubeSystemPrivileges.
	I0214 03:04:52.158639 1169013 kubeadm.go:406] StartCluster complete in 35.819975382s
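
The half-second cadence of the repeated "kubectl get sa default" runs above is minikube polling until the default service account exists before it finishes elevateKubeSystemPrivileges (the duration metric just above). A minimal sketch of that style of poll loop in Go, assuming kubectl is on PATH; waitForDefaultSA is a hypothetical helper for illustration, not minikube's actual code, which runs the kubectl binary over SSH inside the node:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the command
// succeeds or the deadline passes, mirroring the ~0.5s cadence in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // service account exists; privilege bootstrap is done
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
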
	I0214 03:04:52.158657 1169013 settings.go:142] acquiring lock: {Name:mkcc971fda27c724b3c1908f1b3da87aea10d784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:52.158728 1169013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:04:52.159445 1169013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/kubeconfig: {Name:mkc9d4ef83ac02b186254a828f8611428408dff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:04:52.160184 1169013 kapi.go:59] client config for ingress-addon-legacy-089373: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt", KeyFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.key", CAFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 03:04:52.160521 1169013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 03:04:52.160788 1169013 config.go:182] Loaded profile config "ingress-addon-legacy-089373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0214 03:04:52.160902 1169013 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 03:04:52.160982 1169013 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-089373"
	I0214 03:04:52.161021 1169013 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-089373"
	I0214 03:04:52.161073 1169013 host.go:66] Checking if "ingress-addon-legacy-089373" exists ...
	I0214 03:04:52.161617 1169013 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-089373 --format={{.State.Status}}
	I0214 03:04:52.162062 1169013 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-089373"
	I0214 03:04:52.162084 1169013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-089373"
	I0214 03:04:52.162354 1169013 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-089373 --format={{.State.Status}}
	I0214 03:04:52.163081 1169013 cert_rotation.go:137] Starting client certificate rotation controller
	I0214 03:04:52.205686 1169013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:04:52.208691 1169013 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 03:04:52.208713 1169013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 03:04:52.208800 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:52.216255 1169013 kapi.go:59] client config for ingress-addon-legacy-089373: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt", KeyFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.key", CAFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 03:04:52.216516 1169013 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-089373"
	I0214 03:04:52.216544 1169013 host.go:66] Checking if "ingress-addon-legacy-089373" exists ...
	I0214 03:04:52.217001 1169013 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-089373 --format={{.State.Status}}
	I0214 03:04:52.264570 1169013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa Username:docker}
	I0214 03:04:52.269235 1169013 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 03:04:52.269255 1169013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 03:04:52.269319 1169013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-089373
	I0214 03:04:52.295378 1169013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/ingress-addon-legacy-089373/id_rsa Username:docker}
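
The docker container inspect -f calls above use a Go template to read back the host port Docker published for the container's 22/tcp endpoint; that port (34052 in this run) is what the new ssh clients dial on 127.0.0.1. A sketch of the same lookup, assuming the docker CLI is available; hostSSHPort is a hypothetical helper, not minikube's sshutil:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which host port is mapped to the container's
// 22/tcp endpoint, using the same Go template seen in the log above.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("ingress-addon-legacy-089373")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 34052 in this run
}
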
	I0214 03:04:52.502621 1169013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
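
The shell pipeline above patches the coredns ConfigMap in place: the first sed expression splices a hosts block in front of the "forward . /etc/resolv.conf" plugin line, and the second inserts a log directive before errors, so that host.minikube.internal resolves to the host gateway 192.168.49.1 from inside the cluster. Read directly off the sed expressions, the patched Corefile server block gains this fragment (surrounding plugins elided):

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
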
	I0214 03:04:52.563088 1169013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 03:04:52.595197 1169013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 03:04:52.665901 1169013 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-089373" context rescaled to 1 replicas
	I0214 03:04:52.665954 1169013 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0214 03:04:52.668647 1169013 out.go:177] * Verifying Kubernetes components...
	I0214 03:04:52.672251 1169013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:04:53.145277 1169013 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0214 03:04:53.368512 1169013 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0214 03:04:53.367332 1169013 kapi.go:59] client config for ingress-addon-legacy-089373: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt", KeyFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.key", CAFile:"/home/jenkins/minikube-integration/18166-1129740/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 03:04:53.370389 1169013 addons.go:505] enable addons completed in 1.209468085s: enabled=[default-storageclass storage-provisioner]
	I0214 03:04:53.370660 1169013 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-089373" to be "Ready" ...
	I0214 03:04:53.381158 1169013 node_ready.go:49] node "ingress-addon-legacy-089373" has status "Ready":"True"
	I0214 03:04:53.381186 1169013 node_ready.go:38] duration metric: took 10.502479ms waiting for node "ingress-addon-legacy-089373" to be "Ready" ...
	I0214 03:04:53.381210 1169013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 03:04:53.396135 1169013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-55txq" in "kube-system" namespace to be "Ready" ...
	I0214 03:04:55.401385 1169013 pod_ready.go:102] pod "coredns-66bff467f8-55txq" in "kube-system" namespace has status "Ready":"False"
	I0214 03:04:57.402610 1169013 pod_ready.go:102] pod "coredns-66bff467f8-55txq" in "kube-system" namespace has status "Ready":"False"
	I0214 03:04:59.901802 1169013 pod_ready.go:102] pod "coredns-66bff467f8-55txq" in "kube-system" namespace has status "Ready":"False"
	I0214 03:05:00.899385 1169013 pod_ready.go:97] error getting pod "coredns-66bff467f8-55txq" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-55txq" not found
	I0214 03:05:00.899418 1169013 pod_ready.go:81] duration metric: took 7.503250703s waiting for pod "coredns-66bff467f8-55txq" in "kube-system" namespace to be "Ready" ...
	E0214 03:05:00.899435 1169013 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-55txq" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-55txq" not found
	I0214 03:05:00.899443 1169013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:02.905234 1169013 pod_ready.go:102] pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace has status "Ready":"False"
	I0214 03:05:04.905543 1169013 pod_ready.go:102] pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace has status "Ready":"False"
	I0214 03:05:07.405570 1169013 pod_ready.go:102] pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace has status "Ready":"False"
	I0214 03:05:09.905570 1169013 pod_ready.go:102] pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace has status "Ready":"False"
	I0214 03:05:12.405356 1169013 pod_ready.go:102] pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace has status "Ready":"False"
	I0214 03:05:14.406082 1169013 pod_ready.go:102] pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace has status "Ready":"False"
	I0214 03:05:15.905594 1169013 pod_ready.go:92] pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace has status "Ready":"True"
	I0214 03:05:15.905626 1169013 pod_ready.go:81] duration metric: took 15.006173959s waiting for pod "coredns-66bff467f8-7nmwh" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.905638 1169013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-089373" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.916609 1169013 pod_ready.go:92] pod "etcd-ingress-addon-legacy-089373" in "kube-system" namespace has status "Ready":"True"
	I0214 03:05:15.916636 1169013 pod_ready.go:81] duration metric: took 10.989735ms waiting for pod "etcd-ingress-addon-legacy-089373" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.916651 1169013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-089373" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.921728 1169013 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-089373" in "kube-system" namespace has status "Ready":"True"
	I0214 03:05:15.921760 1169013 pod_ready.go:81] duration metric: took 5.100126ms waiting for pod "kube-apiserver-ingress-addon-legacy-089373" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.921773 1169013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-089373" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.927034 1169013 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-089373" in "kube-system" namespace has status "Ready":"True"
	I0214 03:05:15.927062 1169013 pod_ready.go:81] duration metric: took 5.279879ms waiting for pod "kube-controller-manager-ingress-addon-legacy-089373" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.927075 1169013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b2sf6" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.932225 1169013 pod_ready.go:92] pod "kube-proxy-b2sf6" in "kube-system" namespace has status "Ready":"True"
	I0214 03:05:15.932250 1169013 pod_ready.go:81] duration metric: took 5.166823ms waiting for pod "kube-proxy-b2sf6" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:15.932262 1169013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-089373" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:16.100772 1169013 request.go:629] Waited for 168.441376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-089373
	I0214 03:05:16.300789 1169013 request.go:629] Waited for 197.223077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-089373
	I0214 03:05:16.303697 1169013 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-089373" in "kube-system" namespace has status "Ready":"True"
	I0214 03:05:16.303724 1169013 pod_ready.go:81] duration metric: took 371.453535ms waiting for pod "kube-scheduler-ingress-addon-legacy-089373" in "kube-system" namespace to be "Ready" ...
	I0214 03:05:16.303738 1169013 pod_ready.go:38] duration metric: took 22.92251695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 03:05:16.303758 1169013 api_server.go:52] waiting for apiserver process to appear ...
	I0214 03:05:16.303826 1169013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 03:05:16.314858 1169013 api_server.go:72] duration metric: took 23.648869732s to wait for apiserver process to appear ...
	I0214 03:05:16.314885 1169013 api_server.go:88] waiting for apiserver healthz status ...
	I0214 03:05:16.314916 1169013 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0214 03:05:16.323851 1169013 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0214 03:05:16.324743 1169013 api_server.go:141] control plane version: v1.18.20
	I0214 03:05:16.324769 1169013 api_server.go:131] duration metric: took 9.877726ms to wait for apiserver health ...
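
The healthz probe above is nothing more than an HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. A minimal, self-contained sketch in Go; note that minikube verifies against the cluster CA, whereas this illustration skips TLS verification to stay short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustrative only: skip certificate verification so the sketch needs
	// no CA bundle; the real check trusts the profile's ca.crt.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
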
	I0214 03:05:16.324778 1169013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 03:05:16.501077 1169013 request.go:629] Waited for 176.234076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0214 03:05:16.506981 1169013 system_pods.go:59] 8 kube-system pods found
	I0214 03:05:16.507016 1169013 system_pods.go:61] "coredns-66bff467f8-7nmwh" [17f28afb-4c6b-4a66-a4e2-a326148ba38e] Running
	I0214 03:05:16.507022 1169013 system_pods.go:61] "etcd-ingress-addon-legacy-089373" [5a008477-8aa3-42ce-9fc4-d19cfb26b09a] Running
	I0214 03:05:16.507027 1169013 system_pods.go:61] "kindnet-ln8t9" [53f3a891-c831-425a-b74a-330111d07a32] Running
	I0214 03:05:16.507033 1169013 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-089373" [0406587e-f33e-4d61-8c97-28c968eed0a3] Running
	I0214 03:05:16.507073 1169013 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-089373" [bc97b21d-4a3b-493f-a13e-15eada42adf9] Running
	I0214 03:05:16.507080 1169013 system_pods.go:61] "kube-proxy-b2sf6" [e82a8c68-7e41-4551-9645-af8e8feb67ba] Running
	I0214 03:05:16.507085 1169013 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-089373" [3564de80-c636-45d3-985b-e5e9ce2d49ec] Running
	I0214 03:05:16.507093 1169013 system_pods.go:61] "storage-provisioner" [7487db13-23eb-49a7-b066-a71c6391a9a0] Running
	I0214 03:05:16.507098 1169013 system_pods.go:74] duration metric: took 182.315532ms to wait for pod list to return data ...
	I0214 03:05:16.507112 1169013 default_sa.go:34] waiting for default service account to be created ...
	I0214 03:05:16.700395 1169013 request.go:629] Waited for 193.150425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0214 03:05:16.702755 1169013 default_sa.go:45] found service account: "default"
	I0214 03:05:16.702785 1169013 default_sa.go:55] duration metric: took 195.665067ms for default service account to be created ...
	I0214 03:05:16.702797 1169013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 03:05:16.901201 1169013 request.go:629] Waited for 198.334634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0214 03:05:16.907211 1169013 system_pods.go:86] 8 kube-system pods found
	I0214 03:05:16.907251 1169013 system_pods.go:89] "coredns-66bff467f8-7nmwh" [17f28afb-4c6b-4a66-a4e2-a326148ba38e] Running
	I0214 03:05:16.907259 1169013 system_pods.go:89] "etcd-ingress-addon-legacy-089373" [5a008477-8aa3-42ce-9fc4-d19cfb26b09a] Running
	I0214 03:05:16.907264 1169013 system_pods.go:89] "kindnet-ln8t9" [53f3a891-c831-425a-b74a-330111d07a32] Running
	I0214 03:05:16.907270 1169013 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-089373" [0406587e-f33e-4d61-8c97-28c968eed0a3] Running
	I0214 03:05:16.907276 1169013 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-089373" [bc97b21d-4a3b-493f-a13e-15eada42adf9] Running
	I0214 03:05:16.907281 1169013 system_pods.go:89] "kube-proxy-b2sf6" [e82a8c68-7e41-4551-9645-af8e8feb67ba] Running
	I0214 03:05:16.907286 1169013 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-089373" [3564de80-c636-45d3-985b-e5e9ce2d49ec] Running
	I0214 03:05:16.907295 1169013 system_pods.go:89] "storage-provisioner" [7487db13-23eb-49a7-b066-a71c6391a9a0] Running
	I0214 03:05:16.907302 1169013 system_pods.go:126] duration metric: took 204.500477ms to wait for k8s-apps to be running ...
	I0214 03:05:16.907313 1169013 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 03:05:16.907376 1169013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:05:16.918891 1169013 system_svc.go:56] duration metric: took 11.56717ms WaitForService to wait for kubelet.
	I0214 03:05:16.918921 1169013 kubeadm.go:581] duration metric: took 24.252937706s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0214 03:05:16.918943 1169013 node_conditions.go:102] verifying NodePressure condition ...
	I0214 03:05:17.100340 1169013 request.go:629] Waited for 181.274479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0214 03:05:17.103367 1169013 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 03:05:17.103398 1169013 node_conditions.go:123] node cpu capacity is 2
	I0214 03:05:17.103410 1169013 node_conditions.go:105] duration metric: took 184.433944ms to run NodePressure ...
	I0214 03:05:17.103423 1169013 start.go:228] waiting for startup goroutines ...
	I0214 03:05:17.103430 1169013 start.go:233] waiting for cluster config update ...
	I0214 03:05:17.103443 1169013 start.go:242] writing updated cluster config ...
	I0214 03:05:17.103765 1169013 ssh_runner.go:195] Run: rm -f paused
	I0214 03:05:17.157558 1169013 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0214 03:05:17.160228 1169013 out.go:177] 
	W0214 03:05:17.164066 1169013 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0214 03:05:17.166103 1169013 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0214 03:05:17.168402 1169013 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-089373" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4a742e565f38       dd1b12fcb6097       15 seconds ago       Exited              hello-world-app           2                   4255a64486fc6       hello-world-app-5f5d8b66bb-kggzs
	27003c09c4aa2       d315ef79be32c       40 seconds ago       Running             nginx                     0                   236e1b26cb88d       nginx
	f8692d5230a8c       d7f0cba3aa5bf       54 seconds ago       Exited              controller                0                   16082d6ca0883       ingress-nginx-controller-7fcf777cb7-n8mgg
	0d96f29ff82f5       a883f7fc35610       59 seconds ago       Exited              patch                     0                   2aaf4836100f1       ingress-nginx-admission-patch-gx4jz
	d075ec6ed98af       a883f7fc35610       59 seconds ago       Exited              create                    0                   dbeedf0cc528e       ingress-nginx-admission-create-7l4br
	f3ad81d66c776       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   7152c1f8be3e6       coredns-66bff467f8-7nmwh
	9b46150d1a67f       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   c76919240ddc9       storage-provisioner
	4c0743c945285       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   3625fa5f3ab63       kindnet-ln8t9
	c0c5f26ef8209       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   0749c0f354758       kube-proxy-b2sf6
	e3d5af726dc11       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   b040855cac5a9       kube-controller-manager-ingress-addon-legacy-089373
	61a1b3c733296       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   87476f5645990       etcd-ingress-addon-legacy-089373
	5d3dea4473f1f       095f37015706d       About a minute ago   Running             kube-scheduler            0                   523e565fcb25b       kube-scheduler-ingress-addon-legacy-089373
	303dd30dd53f2       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   1f3d8b6f3838f       kube-apiserver-ingress-addon-legacy-089373
	
	
	==> containerd <==
	Feb 14 03:06:04 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:04.552354475Z" level=info msg="TearDown network for sandbox \"3b8204a0bba7b09639cf0ca40c71df4ba132ddd8e35e5ee31ba39d000ca29ceb\" successfully"
	Feb 14 03:06:04 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:04.552525547Z" level=info msg="StopPodSandbox for \"3b8204a0bba7b09639cf0ca40c71df4ba132ddd8e35e5ee31ba39d000ca29ceb\" returns successfully"
	Feb 14 03:06:11 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:11.476948645Z" level=info msg="StopContainer for \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\" with timeout 2 (s)"
	Feb 14 03:06:11 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:11.477444262Z" level=info msg="Stop container \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\" with signal terminated"
	Feb 14 03:06:11 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:11.483816926Z" level=info msg="StopContainer for \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\" with timeout 2 (s)"
	Feb 14 03:06:11 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:11.499879501Z" level=info msg="Skipping the sending of signal terminated to container \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\" because a prior stop with timeout>0 request already sent the signal"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.500399104Z" level=info msg="Kill container \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\""
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.500428945Z" level=info msg="Kill container \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\""
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.583808611Z" level=info msg="shim disconnected" id=f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.583875071Z" level=warning msg="cleaning up after shim disconnected" id=f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4 namespace=k8s.io
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.583885795Z" level=info msg="cleaning up dead shim"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.592383255Z" level=warning msg="cleanup warnings time=\"2024-02-14T03:06:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4584 runtime=io.containerd.runc.v2\ntime=\"2024-02-14T03:06:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.595359883Z" level=info msg="StopContainer for \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\" returns successfully"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.595359949Z" level=info msg="StopContainer for \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\" returns successfully"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.596218599Z" level=info msg="StopPodSandbox for \"16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893\""
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.596296324Z" level=info msg="Container to stop \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.596501840Z" level=info msg="StopPodSandbox for \"16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893\""
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.596538540Z" level=info msg="Container to stop \"f8692d5230a8c76a49689d783f3ca75c309aede68760b910d364464ad4c263c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.627686184Z" level=info msg="shim disconnected" id=16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.628567718Z" level=warning msg="cleaning up after shim disconnected" id=16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893 namespace=k8s.io
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.628602490Z" level=info msg="cleaning up dead shim"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.637091795Z" level=warning msg="cleanup warnings time=\"2024-02-14T03:06:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4623 runtime=io.containerd.runc.v2\n"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.665047868Z" level=error msg="StopPodSandbox for \"16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893\" failed" error="failed to destroy network for sandbox \"16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-c7abc03e558ff4103ad59 --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.704315819Z" level=info msg="TearDown network for sandbox \"16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893\" successfully"
	Feb 14 03:06:13 ingress-addon-legacy-089373 containerd[826]: time="2024-02-14T03:06:13.704366656Z" level=info msg="StopPodSandbox for \"16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893\" returns successfully"
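
The failed StopPodSandbox two entries above is a teardown race rather than a real fault: the portmap CNI plugin tries to flush the sandbox's per-pod DNAT chain (CNI-DN-c7abc03e558ff4103ad59) after it has already been removed, so iptables reports "No chain/target/match by that name"; the retried TearDown for the same sandbox ID, logged just above, then succeeds.
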
	
	
	==> coredns [f3ad81d66c7769e7fc5fa1ba55347afc5e162fc0a98a8f8d270e6335ea3e0453] <==
	[INFO] 10.244.0.5:56986 - 44924 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000080547s
	[INFO] 10.244.0.5:43226 - 17063 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002422764s
	[INFO] 10.244.0.5:56986 - 34835 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058878s
	[INFO] 10.244.0.5:56986 - 30001 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045997s
	[INFO] 10.244.0.5:39275 - 65413 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033813s
	[INFO] 10.244.0.5:39275 - 29436 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051896s
	[INFO] 10.244.0.5:56986 - 6157 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001188873s
	[INFO] 10.244.0.5:39275 - 6216 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048196s
	[INFO] 10.244.0.5:39275 - 63091 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060823s
	[INFO] 10.244.0.5:38646 - 42577 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042518s
	[INFO] 10.244.0.5:38646 - 2058 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032655s
	[INFO] 10.244.0.5:38646 - 26355 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033082s
	[INFO] 10.244.0.5:38646 - 27893 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031548s
	[INFO] 10.244.0.5:38646 - 62421 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030055s
	[INFO] 10.244.0.5:38646 - 64136 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008529s
	[INFO] 10.244.0.5:39275 - 49819 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001514607s
	[INFO] 10.244.0.5:56986 - 11916 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002100427s
	[INFO] 10.244.0.5:38646 - 45280 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002028561s
	[INFO] 10.244.0.5:56986 - 41595 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00027305s
	[INFO] 10.244.0.5:43226 - 61860 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001607076s
	[INFO] 10.244.0.5:43226 - 7490 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049123s
	[INFO] 10.244.0.5:39275 - 51274 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000950265s
	[INFO] 10.244.0.5:39275 - 8001 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004443s
	[INFO] 10.244.0.5:38646 - 50635 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00116662s
	[INFO] 10.244.0.5:38646 - 27433 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000431s
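
The burst of NXDOMAIN answers above is ordinary resolver search-path expansion, not a lookup failure: the client at 10.244.0.5 asks for hello-world-app.default.svc.cluster.local, and because that name has fewer dots than the pod's ndots option, the resolver first tries it with every suffix in the pod's search list (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the unsuffixed name finally returns NOERROR. An illustrative pod resolv.conf that would produce exactly this ordering; the nameserver address is an assumption, the conventional kube-dns ClusterIP for the 10.96.0.0/12 ServiceCIDR:

search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
nameserver 10.96.0.10
options ndots:5
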
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-089373
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-089373
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40f210e92693e4612e04be0697de06db21ac5cf0
	                    minikube.k8s.io/name=ingress-addon-legacy-089373
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T03_04_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 03:04:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-089373
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 03:06:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 03:06:10 +0000   Wed, 14 Feb 2024 03:04:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 03:06:10 +0000   Wed, 14 Feb 2024 03:04:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 03:06:10 +0000   Wed, 14 Feb 2024 03:04:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 03:06:10 +0000   Wed, 14 Feb 2024 03:04:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-089373
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 322445cd95ed408793d8e07beb43920e
	  System UUID:                c567c286-1d48-456d-aaa9-c88ab6425e9a
	  Boot ID:                    b6f8a130-5377-4a84-9795-3edbfc6d2fc5
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-kggzs                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 coredns-66bff467f8-7nmwh                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-ingress-addon-legacy-089373                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-ln8t9                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-089373             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-089373    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-b2sf6                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-ingress-addon-legacy-089373             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  113s (x5 over 113s)  kubelet     Node ingress-addon-legacy-089373 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x5 over 113s)  kubelet     Node ingress-addon-legacy-089373 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x5 over 113s)  kubelet     Node ingress-addon-legacy-089373 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-089373 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-089373 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-089373 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-089373 status is now: NodeReady
	  Normal  Starting                 86s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001181] FS-Cache: O-key=[8] 'e1d6c90000000000'
	[  +0.000737] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009241f644
	[  +0.001040] FS-Cache: N-key=[8] 'e1d6c90000000000'
	[  +0.002938] FS-Cache: Duplicate cookie detected
	[  +0.000771] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000955] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=000000009bfcc117
	[  +0.001146] FS-Cache: O-key=[8] 'e1d6c90000000000'
	[  +0.000751] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=00000000a3b785fa
	[  +0.001043] FS-Cache: N-key=[8] 'e1d6c90000000000'
	[Feb14 03:03] FS-Cache: Duplicate cookie detected
	[  +0.000763] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=000000009229b016
	[  +0.001160] FS-Cache: O-key=[8] 'e0d6c90000000000'
	[  +0.000800] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001049] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=000000009241f644
	[  +0.001135] FS-Cache: N-key=[8] 'e0d6c90000000000'
	[  +0.356910] FS-Cache: Duplicate cookie detected
	[  +0.000800] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001136] FS-Cache: O-cookie d=00000000c78ed886{9p.inode} n=000000003c7bf442
	[  +0.001171] FS-Cache: O-key=[8] 'e6d6c90000000000'
	[  +0.000822] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001046] FS-Cache: N-cookie d=00000000c78ed886{9p.inode} n=00000000d6dd9a7d
	[  +0.001173] FS-Cache: N-key=[8] 'e6d6c90000000000'
	
	
	==> etcd [61a1b3c733296ee892c125b5f1249a4c16729d8dcbe61c59944e4e82cff656ba] <==
	raft2024/02/14 03:04:29 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/02/14 03:04:29 INFO: aec36adc501070cc became follower at term 1
	raft2024/02/14 03:04:29 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-14 03:04:29.149201 W | auth: simple token is not cryptographically signed
	2024-02-14 03:04:29.152112 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-14 03:04:29.156462 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-14 03:04:29.156851 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-02-14 03:04:29.157084 I | embed: listening for peers on 192.168.49.2:2380
	2024-02-14 03:04:29.157267 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/14 03:04:29 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-14 03:04:29.158024 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2024/02/14 03:04:29 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/02/14 03:04:29 INFO: aec36adc501070cc became candidate at term 2
	raft2024/02/14 03:04:29 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/02/14 03:04:29 INFO: aec36adc501070cc became leader at term 2
	raft2024/02/14 03:04:29 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-02-14 03:04:29.845253 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-14 03:04:29.846415 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-14 03:04:29.846603 I | etcdserver/api: enabled capabilities for version 3.4
	2024-02-14 03:04:29.846718 I | etcdserver: published {Name:ingress-addon-legacy-089373 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-02-14 03:04:29.846950 I | embed: ready to serve client requests
	2024-02-14 03:04:29.847502 I | embed: ready to serve client requests
	2024-02-14 03:04:29.848785 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-14 03:04:29.853759 I | embed: serving client requests on 192.168.49.2:2379
	2024-02-14 03:04:52.271393 W | etcdserver: read-only range request "key:\"/registry/clusterroles/admin\" " with result "range_response_count:1 size:3325" took too long (120.410797ms) to execute
	
	
	==> kernel <==
	 03:06:19 up  5:48,  0 users,  load average: 1.30, 1.71, 1.84
	Linux ingress-addon-legacy-089373 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [4c0743c9452853e1bd4fc0580b64bd66f41a653468c3ccf8c2ecf809e2ed1914] <==
	I0214 03:04:54.406432       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0214 03:04:54.406502       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0214 03:04:54.406617       1 main.go:116] setting mtu 1500 for CNI 
	I0214 03:04:54.406707       1 main.go:146] kindnetd IP family: "ipv4"
	I0214 03:04:54.406763       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 03:04:54.706086       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:04:54.706308       1 main.go:227] handling current node
	I0214 03:05:04.721335       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:05:04.721365       1 main.go:227] handling current node
	I0214 03:05:14.731775       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:05:14.731806       1 main.go:227] handling current node
	I0214 03:05:24.742546       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:05:24.742577       1 main.go:227] handling current node
	I0214 03:05:34.753852       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:05:34.753881       1 main.go:227] handling current node
	I0214 03:05:44.757857       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:05:44.757888       1 main.go:227] handling current node
	I0214 03:05:54.761767       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:05:54.761797       1 main.go:227] handling current node
	I0214 03:06:04.770260       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:06:04.770290       1 main.go:227] handling current node
	I0214 03:06:14.780301       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0214 03:06:14.780333       1 main.go:227] handling current node
	
	
	==> kube-apiserver [303dd30dd53f2e932ef3d8e8b1c5dd025960cec35810aa812c829397f3c58273] <==
	I0214 03:04:33.755006       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0214 03:04:33.813405       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0214 03:04:33.936931       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0214 03:04:33.941190       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 03:04:34.022606       1 cache.go:39] Caches are synced for autoregister controller
	I0214 03:04:34.025526       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 03:04:34.028396       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0214 03:04:34.720898       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0214 03:04:34.720941       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0214 03:04:34.726296       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0214 03:04:34.729965       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0214 03:04:34.729993       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0214 03:04:35.184946       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 03:04:35.226527       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0214 03:04:35.312039       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0214 03:04:35.313212       1 controller.go:609] quota admission added evaluator for: endpoints
	I0214 03:04:35.317289       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 03:04:36.183461       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0214 03:04:37.039285       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0214 03:04:37.125992       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0214 03:04:40.498161       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 03:04:51.999028       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0214 03:04:52.122212       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0214 03:05:18.036853       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0214 03:05:36.320153       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [e3d5af726dc115a1ef7c7362b9ca7bf76b9e7773a51f19d6fba34f7ce82839cd] <==
	I0214 03:04:52.224003       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d2ef5206-2392-4030-a473-dc18bc818f4e", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-55txq
	E0214 03:04:52.274055       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0214 03:04:52.305665       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"7789582d-b8f4-43f3-aeda-a09ba2c272f8", ResourceVersion:"213", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63843476677, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001a1e220), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4001a1e240)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001a1e260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40019a7d80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4001a1e280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a1e2a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a1e2e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40019ea780), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40019875e8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40003f3ea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000e360)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001987638)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0214 03:04:52.306634       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"0668de4f-1f1e-4ce0-a27b-103446f99bef", ResourceVersion:"236", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63843476677, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001a1e340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001a1e360)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001a1e380), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a1e3a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a1e3c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a1e3e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a1e400)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a1e440)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40019ea910), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001987838), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000542000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000e368)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001987880)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0214 03:04:52.320239       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d2ef5206-2392-4030-a473-dc18bc818f4e", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7nmwh
	I0214 03:04:52.347076       1 shared_informer.go:230] Caches are synced for stateful set 
	I0214 03:04:52.405337       1 shared_informer.go:230] Caches are synced for attach detach 
	I0214 03:04:52.427246       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"cf6175d6-653f-4924-ba11-c81e02e2673b", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0214 03:04:52.429978       1 shared_informer.go:230] Caches are synced for disruption 
	I0214 03:04:52.430010       1 disruption.go:339] Sending events to api server.
	I0214 03:04:52.459449       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d2ef5206-2392-4030-a473-dc18bc818f4e", APIVersion:"apps/v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-55txq
	I0214 03:04:52.504822       1 shared_informer.go:230] Caches are synced for resource quota 
	I0214 03:04:52.517752       1 shared_informer.go:230] Caches are synced for resource quota 
	I0214 03:04:52.525822       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0214 03:04:52.570560       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0214 03:04:52.570583       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0214 03:05:18.020023       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b14df601-d6ba-4f5f-b0ee-a1f41d78f755", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0214 03:05:18.042658       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"85c66759-4c4b-4795-a53c-50523886e1e2", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-n8mgg
	I0214 03:05:18.089751       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7b9b3a30-9b24-4e5a-87fb-cba89c0038d5", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-7l4br
	I0214 03:05:18.143771       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5f339aac-3788-4666-adfd-06736d60cfe3", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-gx4jz
	I0214 03:05:20.690951       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7b9b3a30-9b24-4e5a-87fb-cba89c0038d5", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0214 03:05:20.719205       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5f339aac-3788-4666-adfd-06736d60cfe3", APIVersion:"batch/v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0214 03:05:45.131282       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"fe46b50d-2edd-4339-b4e4-b3d8921d5f19", APIVersion:"apps/v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0214 03:05:45.139867       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"87c54be8-0ca0-4059-88f8-d747492f93c7", APIVersion:"apps/v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-kggzs
	E0214 03:06:16.260840       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-5znvd" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [c0c5f26ef82096ac6b473be5634e8c62947aa855da764a67c74de91fc47d8f08] <==
	W0214 03:04:53.004699       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0214 03:04:53.020661       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0214 03:04:53.020695       1 server_others.go:186] Using iptables Proxier.
	I0214 03:04:53.020968       1 server.go:583] Version: v1.18.20
	I0214 03:04:53.021924       1 config.go:315] Starting service config controller
	I0214 03:04:53.021969       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0214 03:04:53.022066       1 config.go:133] Starting endpoints config controller
	I0214 03:04:53.022071       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0214 03:04:53.122176       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0214 03:04:53.122272       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [5d3dea4473f1f58127a57eee454635b059149a82513bf31320d2dfc9d0d49259] <==
	W0214 03:04:33.884599       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 03:04:33.884607       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 03:04:33.884613       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 03:04:33.929876       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0214 03:04:33.929916       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0214 03:04:33.933983       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0214 03:04:33.934386       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 03:04:33.934550       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 03:04:33.935067       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0214 03:04:33.949922       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 03:04:33.950861       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 03:04:33.950900       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0214 03:04:33.951130       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0214 03:04:33.951274       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 03:04:33.951350       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0214 03:04:33.951498       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 03:04:33.951578       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 03:04:33.951692       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 03:04:33.951809       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 03:04:33.951950       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 03:04:33.952083       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 03:04:34.767329       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 03:04:34.791074       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 03:04:34.793313       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0214 03:04:37.135122       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Feb 14 03:05:49 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:05:49.822594    1640 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ef905b5cb62b5e713e6ca77f17dd9fefe9acd1483a6ffb011777de98c7326012
	Feb 14 03:05:49 ingress-addon-legacy-089373 kubelet[1640]: E0214 03:05:49.822858    1640 pod_workers.go:191] Error syncing pod 6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d ("hello-world-app-5f5d8b66bb-kggzs_default(6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-kggzs_default(6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d)"
	Feb 14 03:06:01 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:01.126917    1640 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-8n6tt" (UniqueName: "kubernetes.io/secret/a4ce6611-b5c6-472c-ad6d-e3ac0452a655-minikube-ingress-dns-token-8n6tt") pod "a4ce6611-b5c6-472c-ad6d-e3ac0452a655" (UID: "a4ce6611-b5c6-472c-ad6d-e3ac0452a655")
	Feb 14 03:06:01 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:01.131156    1640 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ce6611-b5c6-472c-ad6d-e3ac0452a655-minikube-ingress-dns-token-8n6tt" (OuterVolumeSpecName: "minikube-ingress-dns-token-8n6tt") pod "a4ce6611-b5c6-472c-ad6d-e3ac0452a655" (UID: "a4ce6611-b5c6-472c-ad6d-e3ac0452a655"). InnerVolumeSpecName "minikube-ingress-dns-token-8n6tt". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 03:06:01 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:01.228377    1640 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-8n6tt" (UniqueName: "kubernetes.io/secret/a4ce6611-b5c6-472c-ad6d-e3ac0452a655-minikube-ingress-dns-token-8n6tt") on node "ingress-addon-legacy-089373" DevicePath ""
	Feb 14 03:06:02 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:02.846678    1640 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 79226c03c8b6f8e65a4841a7663d99b06bc79357abba7d31e28cae19772c9430
	Feb 14 03:06:03 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:03.547140    1640 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ef905b5cb62b5e713e6ca77f17dd9fefe9acd1483a6ffb011777de98c7326012
	Feb 14 03:06:03 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:03.852523    1640 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ef905b5cb62b5e713e6ca77f17dd9fefe9acd1483a6ffb011777de98c7326012
	Feb 14 03:06:03 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:03.852880    1640 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d4a742e565f38de707c98abcdf54c9bef48a64f18519a0d0d869d1d31624b7a7
	Feb 14 03:06:03 ingress-addon-legacy-089373 kubelet[1640]: E0214 03:06:03.853155    1640 pod_workers.go:191] Error syncing pod 6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d ("hello-world-app-5f5d8b66bb-kggzs_default(6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-kggzs_default(6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d)"
	Feb 14 03:06:11 ingress-addon-legacy-089373 kubelet[1640]: E0214 03:06:11.484175    1640 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-n8mgg.17b39bc90ad7bc99", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-n8mgg", UID:"76603ccd-dda6-4276-a578-b5138f8b5dbe", APIVersion:"v1", ResourceVersion:"483", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-089373"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b2968dc621e99, ext:94516298320, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b2968dc621e99, ext:94516298320, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-n8mgg.17b39bc90ad7bc99" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 14 03:06:11 ingress-addon-legacy-089373 kubelet[1640]: E0214 03:06:11.522998    1640 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-n8mgg.17b39bc90ad7bc99", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-n8mgg", UID:"76603ccd-dda6-4276-a578-b5138f8b5dbe", APIVersion:"v1", ResourceVersion:"483", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-089373"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b2968dc621e99, ext:94516298320, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b2968dc86217f, ext:94518658359, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-n8mgg.17b39bc90ad7bc99" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 14 03:06:13 ingress-addon-legacy-089373 kubelet[1640]: E0214 03:06:13.665323    1640 remote_runtime.go:128] StopPodSandbox "16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893": plugin type="portmap" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-c7abc03e558ff4103ad59 --wait]: exit status 1: iptables: No chain/target/match by that name.
	Feb 14 03:06:13 ingress-addon-legacy-089373 kubelet[1640]: E0214 03:06:13.665392    1640 kuberuntime_manager.go:912] Failed to stop sandbox {"containerd" "16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893"}
	Feb 14 03:06:13 ingress-addon-legacy-089373 kubelet[1640]: E0214 03:06:13.665435    1640 kubelet_pods.go:1235] Failed killing the pod "ingress-nginx-controller-7fcf777cb7-n8mgg": failed to "KillPodSandbox" for "76603ccd-dda6-4276-a578-b5138f8b5dbe" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-c7abc03e558ff4103ad59 --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Feb 14 03:06:13 ingress-addon-legacy-089373 kubelet[1640]: W0214 03:06:13.874480    1640 pod_container_deletor.go:77] Container "16082d6ca0883051717006cc823884714691d3b410e1479a4060c42597825893" not found in pod's containers
	Feb 14 03:06:15 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:15.547030    1640 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d4a742e565f38de707c98abcdf54c9bef48a64f18519a0d0d869d1d31624b7a7
	Feb 14 03:06:15 ingress-addon-legacy-089373 kubelet[1640]: E0214 03:06:15.547314    1640 pod_workers.go:191] Error syncing pod 6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d ("hello-world-app-5f5d8b66bb-kggzs_default(6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-kggzs_default(6e72f71c-8a2b-41df-8ca1-4ba7a8e98c7d)"
	Feb 14 03:06:15 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:15.579074    1640 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/76603ccd-dda6-4276-a578-b5138f8b5dbe-webhook-cert") pod "76603ccd-dda6-4276-a578-b5138f8b5dbe" (UID: "76603ccd-dda6-4276-a578-b5138f8b5dbe")
	Feb 14 03:06:15 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:15.579152    1640 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-cfdvj" (UniqueName: "kubernetes.io/secret/76603ccd-dda6-4276-a578-b5138f8b5dbe-ingress-nginx-token-cfdvj") pod "76603ccd-dda6-4276-a578-b5138f8b5dbe" (UID: "76603ccd-dda6-4276-a578-b5138f8b5dbe")
	Feb 14 03:06:15 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:15.585312    1640 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76603ccd-dda6-4276-a578-b5138f8b5dbe-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "76603ccd-dda6-4276-a578-b5138f8b5dbe" (UID: "76603ccd-dda6-4276-a578-b5138f8b5dbe"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 03:06:15 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:15.585472    1640 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76603ccd-dda6-4276-a578-b5138f8b5dbe-ingress-nginx-token-cfdvj" (OuterVolumeSpecName: "ingress-nginx-token-cfdvj") pod "76603ccd-dda6-4276-a578-b5138f8b5dbe" (UID: "76603ccd-dda6-4276-a578-b5138f8b5dbe"). InnerVolumeSpecName "ingress-nginx-token-cfdvj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 03:06:15 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:15.679598    1640 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/76603ccd-dda6-4276-a578-b5138f8b5dbe-webhook-cert") on node "ingress-addon-legacy-089373" DevicePath ""
	Feb 14 03:06:15 ingress-addon-legacy-089373 kubelet[1640]: I0214 03:06:15.679807    1640 reconciler.go:319] Volume detached for volume "ingress-nginx-token-cfdvj" (UniqueName: "kubernetes.io/secret/76603ccd-dda6-4276-a578-b5138f8b5dbe-ingress-nginx-token-cfdvj") on node "ingress-addon-legacy-089373" DevicePath ""
	Feb 14 03:06:16 ingress-addon-legacy-089373 kubelet[1640]: W0214 03:06:16.552899    1640 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/76603ccd-dda6-4276-a578-b5138f8b5dbe/volumes" does not exist
	
	
	==> storage-provisioner [9b46150d1a67f43ff2e98baa37b1e690926fbce6e2ed0b9872959f54468f690a] <==
	I0214 03:04:55.734179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 03:04:55.745933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 03:04:55.746448       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 03:04:55.753300       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 03:04:55.753855       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49afb9ab-0658-451e-af5c-42b7e04cd584", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-089373_f4c7e21f-8aaf-41dd-ac4e-f909287878d3 became leader
	I0214 03:04:55.754026       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-089373_f4c7e21f-8aaf-41dd-ac4e-f909287878d3!
	I0214 03:04:55.855174       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-089373_f4c7e21f-8aaf-41dd-ac4e-f909287878d3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-089373 -n ingress-addon-legacy-089373
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-089373 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (53.48s)
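The portmap teardown error in the kubelet log above appears to be a retry race: the pod's CNI-DN-* NAT chain was already gone by the time kubelet re-ran the sandbox kill, so `iptables -F` exited 1 with "No chain/target/match by that name". A minimal Go sketch of an idempotent flush that tolerates the already-deleted chain; this is a hypothetical helper, not minikube's or the CNI plugin's actual code, and the chain name is copied from the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// flushChainIfPresent flushes an iptables NAT chain, treating an
// already-deleted chain as success so a retried teardown stays
// idempotent instead of failing like the sandbox kill above.
func flushChainIfPresent(chain string) error {
	out, err := exec.Command("iptables", "-t", "nat", "-F", chain, "--wait").CombinedOutput()
	if err != nil && strings.Contains(string(out), "No chain/target/match by that name") {
		return nil // chain was already removed; nothing left to tear down
	}
	return err
}

func main() {
	// Chain name copied from the kubelet log above.
	if err := flushChainIfPresent("CNI-DN-c7abc03e558ff4103ad59"); err != nil {
		fmt.Println("teardown failed:", err)
	}
}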

                                                
                                    

Test pass (279/320)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 28.24
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
9 TestDownloadOnly/v1.16.0/DeleteAll 0.21
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 22.21
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.22
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 32.24
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.56
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 137.22
38 TestAddons/parallel/Registry 17.92
40 TestAddons/parallel/InspektorGadget 11.99
41 TestAddons/parallel/MetricsServer 6.82
45 TestAddons/parallel/Headlamp 11.02
46 TestAddons/parallel/CloudSpanner 5.81
47 TestAddons/parallel/LocalPath 9.62
48 TestAddons/parallel/NvidiaDevicePlugin 6.66
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.16
53 TestAddons/StoppedEnableDisable 12.32
54 TestCertOptions 40.14
55 TestCertExpiration 230.57
57 TestForceSystemdFlag 45.74
58 TestForceSystemdEnv 42.36
59 TestDockerEnvContainerd 47.35
64 TestErrorSpam/setup 30.42
65 TestErrorSpam/start 0.8
66 TestErrorSpam/status 1.01
67 TestErrorSpam/pause 1.72
68 TestErrorSpam/unpause 1.85
69 TestErrorSpam/stop 1.49
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 59.02
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.59
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.04
81 TestFunctional/serial/CacheCmd/cache/add_local 1.48
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.16
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.18
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
91 TestFunctional/serial/LogsCmd 1.52
92 TestFunctional/serial/LogsFileCmd 1.7
93 TestFunctional/serial/InvalidService 4.31
95 TestFunctional/parallel/ConfigCmd 0.59
96 TestFunctional/parallel/DashboardCmd 11.6
97 TestFunctional/parallel/DryRun 0.48
98 TestFunctional/parallel/InternationalLanguage 0.21
99 TestFunctional/parallel/StatusCmd 1.2
103 TestFunctional/parallel/ServiceCmdConnect 8.69
104 TestFunctional/parallel/AddonsCmd 0.19
105 TestFunctional/parallel/PersistentVolumeClaim 25.59
107 TestFunctional/parallel/SSHCmd 0.77
108 TestFunctional/parallel/CpCmd 2.68
110 TestFunctional/parallel/FileSync 0.38
111 TestFunctional/parallel/CertSync 2.25
115 TestFunctional/parallel/NodeLabels 0.12
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
119 TestFunctional/parallel/License 0.47
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.52
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
132 TestFunctional/parallel/ServiceCmd/List 0.54
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
135 TestFunctional/parallel/ProfileCmd/profile_list 0.51
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
138 TestFunctional/parallel/ServiceCmd/Format 0.57
139 TestFunctional/parallel/MountCmd/any-port 6.91
140 TestFunctional/parallel/ServiceCmd/URL 0.56
141 TestFunctional/parallel/MountCmd/specific-port 2.23
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.55
143 TestFunctional/parallel/Version/short 0.08
144 TestFunctional/parallel/Version/components 1.4
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.7
150 TestFunctional/parallel/ImageCommands/Setup 1.77
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 105.89
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.98
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.63
174 TestJSONOutput/start/Command 78.63
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.77
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.69
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.77
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.24
199 TestKicCustomNetwork/create_custom_network 52.67
200 TestKicCustomNetwork/use_default_bridge_network 33.7
201 TestKicExistingNetwork 33.18
202 TestKicCustomSubnet 36.57
203 TestKicStaticIP 38.11
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 69.21
208 TestMountStart/serial/StartWithMountFirst 9.29
209 TestMountStart/serial/VerifyMountFirst 0.28
210 TestMountStart/serial/StartWithMountSecond 5.93
211 TestMountStart/serial/VerifyMountSecond 0.27
212 TestMountStart/serial/DeleteFirst 1.62
213 TestMountStart/serial/VerifyMountPostDelete 0.28
214 TestMountStart/serial/Stop 1.2
215 TestMountStart/serial/RestartStopped 7.5
216 TestMountStart/serial/VerifyMountPostStop 0.29
219 TestMultiNode/serial/FreshStart2Nodes 78.32
220 TestMultiNode/serial/DeployApp2Nodes 6.19
221 TestMultiNode/serial/PingHostFrom2Pods 1.06
222 TestMultiNode/serial/AddNode 31.59
223 TestMultiNode/serial/MultiNodeLabels 0.1
224 TestMultiNode/serial/ProfileList 0.35
225 TestMultiNode/serial/CopyFile 10.71
226 TestMultiNode/serial/StopNode 2.33
227 TestMultiNode/serial/StartAfterStop 11.95
228 TestMultiNode/serial/RestartKeepsNodes 117.39
229 TestMultiNode/serial/DeleteNode 5.04
230 TestMultiNode/serial/StopMultiNode 23.95
231 TestMultiNode/serial/RestartMultiNode 86.12
232 TestMultiNode/serial/ValidateNameConflict 33.23
237 TestPreload 149.76
239 TestScheduledStopUnix 106.45
242 TestInsufficientStorage 11.55
243 TestRunningBinaryUpgrade 87.07
245 TestKubernetesUpgrade 385.38
246 TestMissingContainerUpgrade 167.3
248 TestPause/serial/Start 95.95
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
251 TestNoKubernetes/serial/StartWithK8s 42.8
252 TestNoKubernetes/serial/StartWithStopK8s 16.36
253 TestNoKubernetes/serial/Start 7.79
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
255 TestNoKubernetes/serial/ProfileList 0.96
256 TestNoKubernetes/serial/Stop 1.23
257 TestNoKubernetes/serial/StartNoArgs 6.52
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
259 TestPause/serial/SecondStartNoReconfiguration 7.51
260 TestPause/serial/Pause 0.95
261 TestPause/serial/VerifyStatus 0.48
262 TestPause/serial/Unpause 0.96
263 TestPause/serial/PauseAgain 1.34
264 TestPause/serial/DeletePaused 3.18
265 TestPause/serial/VerifyDeletedResources 0.22
266 TestStoppedBinaryUpgrade/Setup 1.58
267 TestStoppedBinaryUpgrade/Upgrade 111.69
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
283 TestNetworkPlugins/group/false 5.86
288 TestStartStop/group/old-k8s-version/serial/FirstStart 121.89
289 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
290 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
291 TestStartStop/group/old-k8s-version/serial/Stop 12.08
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
293 TestStartStop/group/old-k8s-version/serial/SecondStart 662.19
295 TestStartStop/group/no-preload/serial/FirstStart 77.51
296 TestStartStop/group/no-preload/serial/DeployApp 8.38
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
298 TestStartStop/group/no-preload/serial/Stop 12.03
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/no-preload/serial/SecondStart 336.99
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
303 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
304 TestStartStop/group/no-preload/serial/Pause 3.46
306 TestStartStop/group/embed-certs/serial/FirstStart 84.73
307 TestStartStop/group/embed-certs/serial/DeployApp 9.34
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
309 TestStartStop/group/embed-certs/serial/Stop 12.1
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
311 TestStartStop/group/embed-certs/serial/SecondStart 338.99
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
315 TestStartStop/group/old-k8s-version/serial/Pause 3.24
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.31
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 340.28
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
326 TestStartStop/group/embed-certs/serial/Pause 3.23
328 TestStartStop/group/newest-cni/serial/FirstStart 45.04
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.21
331 TestStartStop/group/newest-cni/serial/Stop 1.32
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
333 TestStartStop/group/newest-cni/serial/SecondStart 32.48
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
337 TestStartStop/group/newest-cni/serial/Pause 3.07
338 TestNetworkPlugins/group/auto/Start 58.8
339 TestNetworkPlugins/group/auto/KubeletFlags 0.3
340 TestNetworkPlugins/group/auto/NetCatPod 8.35
341 TestNetworkPlugins/group/auto/DNS 0.3
342 TestNetworkPlugins/group/auto/Localhost 0.23
343 TestNetworkPlugins/group/auto/HairPin 0.22
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.19
348 TestNetworkPlugins/group/kindnet/Start 93.41
349 TestNetworkPlugins/group/calico/Start 80.28
350 TestNetworkPlugins/group/calico/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/calico/KubeletFlags 0.32
353 TestNetworkPlugins/group/calico/NetCatPod 10.28
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
355 TestNetworkPlugins/group/kindnet/NetCatPod 8.28
356 TestNetworkPlugins/group/calico/DNS 0.27
357 TestNetworkPlugins/group/calico/Localhost 0.19
358 TestNetworkPlugins/group/calico/HairPin 0.17
359 TestNetworkPlugins/group/kindnet/DNS 0.21
360 TestNetworkPlugins/group/kindnet/Localhost 0.22
361 TestNetworkPlugins/group/kindnet/HairPin 0.24
362 TestNetworkPlugins/group/custom-flannel/Start 67.15
363 TestNetworkPlugins/group/enable-default-cni/Start 90.5
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.32
366 TestNetworkPlugins/group/custom-flannel/DNS 0.2
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.42
371 TestNetworkPlugins/group/flannel/Start 64.61
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
375 TestNetworkPlugins/group/bridge/Start 88.74
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
378 TestNetworkPlugins/group/flannel/NetCatPod 12.27
379 TestNetworkPlugins/group/flannel/DNS 0.2
380 TestNetworkPlugins/group/flannel/Localhost 0.2
381 TestNetworkPlugins/group/flannel/HairPin 0.16
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
383 TestNetworkPlugins/group/bridge/NetCatPod 10.29
384 TestNetworkPlugins/group/bridge/DNS 0.17
385 TestNetworkPlugins/group/bridge/Localhost 0.16
386 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.16.0/json-events (28.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-630494 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-630494 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (28.239173733s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (28.24s)
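With `-o=json`, minikube writes a stream of self-contained JSON event objects to stdout, which is what this subtest consumes. A minimal sketch of decoding that stream; it assumes only that stdout is a sequence of JSON objects and decodes into a loose map rather than assuming an event schema (the "type" key printed here is an assumption based on minikube's CloudEvents-style output):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation the test makes; the binary path assumes the
	// test workspace layout shown in this report.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
		"--download-only", "-p", "download-only-630494", "--force",
		"--kubernetes-version=v1.16.0", "--container-runtime=containerd",
		"--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	dec := json.NewDecoder(stdout)
	for {
		var ev map[string]any // no event schema assumed
		if err := dec.Decode(&ev); err != nil {
			break // io.EOF once minikube exits and closes stdout
		}
		fmt.Println("event:", ev["type"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}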

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
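preload-exists only has to assert that the tarball fetched by the json-events subtest is on disk. A minimal sketch of that check, with the cache path copied from the LogsDuration output below; the actual assertion in aaa_download_only_test.go may differ, this is just its shape:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path copied from the download log in the LogsDuration output below.
	p := "/home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4"
	fi, err := os.Stat(p)
	if err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Printf("preload present: %d bytes\n", fi.Size())
}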

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-630494
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-630494: exit status 85 (89.398907ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-630494 | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC |          |
	|         | -p download-only-630494        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:53:26
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:53:26.916218 1135093 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:53:26.916341 1135093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:53:26.916351 1135093 out.go:304] Setting ErrFile to fd 2...
	I0214 02:53:26.916357 1135093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:53:26.916586 1135093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	W0214 02:53:26.916701 1135093 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18166-1129740/.minikube/config/config.json: open /home/jenkins/minikube-integration/18166-1129740/.minikube/config/config.json: no such file or directory
	I0214 02:53:26.917136 1135093 out.go:298] Setting JSON to true
	I0214 02:53:26.917973 1135093 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20153,"bootTime":1707859054,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 02:53:26.918043 1135093 start.go:138] virtualization:  
	I0214 02:53:26.921098 1135093 out.go:97] [download-only-630494] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:53:26.922997 1135093 out.go:169] MINIKUBE_LOCATION=18166
	W0214 02:53:26.921333 1135093 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball: no such file or directory
	I0214 02:53:26.921373 1135093 notify.go:220] Checking for updates...
	I0214 02:53:26.926994 1135093 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:53:26.929108 1135093 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 02:53:26.931071 1135093 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 02:53:26.932985 1135093 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 02:53:26.936481 1135093 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 02:53:26.936754 1135093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:53:26.957267 1135093 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:53:26.957363 1135093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:53:27.029094 1135093 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 02:53:27.019531308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:53:27.029200 1135093 docker.go:295] overlay module found
	I0214 02:53:27.031244 1135093 out.go:97] Using the docker driver based on user configuration
	I0214 02:53:27.031274 1135093 start.go:298] selected driver: docker
	I0214 02:53:27.031281 1135093 start.go:902] validating driver "docker" against <nil>
	I0214 02:53:27.031448 1135093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:53:27.096142 1135093 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 02:53:27.087525976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:53:27.096310 1135093 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:53:27.096591 1135093 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 02:53:27.096741 1135093 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 02:53:27.098813 1135093 out.go:169] Using Docker driver with root privileges
	I0214 02:53:27.100552 1135093 cni.go:84] Creating CNI manager for ""
	I0214 02:53:27.100581 1135093 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:53:27.100593 1135093 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 02:53:27.100612 1135093 start_flags.go:321] config:
	{Name:download-only-630494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-630494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:53:27.102818 1135093 out.go:97] Starting control plane node download-only-630494 in cluster download-only-630494
	I0214 02:53:27.102856 1135093 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0214 02:53:27.104648 1135093 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:53:27.104684 1135093 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0214 02:53:27.104713 1135093 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:53:27.119665 1135093 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:53:27.120310 1135093 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:53:27.120436 1135093 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:53:27.173018 1135093 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0214 02:53:27.173045 1135093 cache.go:56] Caching tarball of preloaded images
	I0214 02:53:27.173687 1135093 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0214 02:53:27.175993 1135093 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0214 02:53:27.176019 1135093 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:53:27.304746 1135093 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0214 02:53:34.218708 1135093 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 02:53:47.860468 1135093 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:53:47.860572 1135093 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:53:48.961020 1135093 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0214 02:53:48.961381 1135093 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/download-only-630494/config.json ...
	I0214 02:53:48.961416 1135093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/download-only-630494/config.json: {Name:mk4a97205455347d21a13f65a82d230e4f69ae6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:53:48.961622 1135093 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0214 02:53:48.961828 1135093 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-630494"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
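The preload fetch logged above carries the expected digest in the URL's `checksum=md5:` query parameter and then verifies the written file (the "getting checksum" / "verifying checksum" lines from preload.go). A minimal sketch of that verify-while-downloading pattern, with URL and digest copied from the log; it is not minikube's download code, which delegates the transfer to a download helper:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// fetchAndVerify downloads url to dst and compares the file's MD5
// against wantMD5 (hex), the same verify-after-download step the
// preload log above reports as "verifying checksum".
func fetchAndVerify(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Hash while writing so the tarball is not read a second time.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and md5 copied from the download log above.
	err := fetchAndVerify(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"1f1e2324dbd6e4f3d8734226d9194e9f",
	)
	if err != nil {
		log.Fatal(err)
	}
}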

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-630494
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (22.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-950365 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-950365 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (22.205809475s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (22.21s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-950365
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-950365: exit status 85 (83.357163ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-630494 | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC |                     |
	|         | -p download-only-630494        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC | 14 Feb 24 02:53 UTC |
	| delete  | -p download-only-630494        | download-only-630494 | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC | 14 Feb 24 02:53 UTC |
	| start   | -o=json --download-only        | download-only-950365 | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC |                     |
	|         | -p download-only-950365        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:53:55
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:53:55.575779 1135251 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:53:55.575928 1135251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:53:55.575954 1135251 out.go:304] Setting ErrFile to fd 2...
	I0214 02:53:55.575973 1135251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:53:55.576244 1135251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 02:53:55.576701 1135251 out.go:298] Setting JSON to true
	I0214 02:53:55.577576 1135251 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20181,"bootTime":1707859054,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 02:53:55.577648 1135251 start.go:138] virtualization:  
	I0214 02:53:55.580572 1135251 out.go:97] [download-only-950365] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:53:55.582731 1135251 out.go:169] MINIKUBE_LOCATION=18166
	I0214 02:53:55.580777 1135251 notify.go:220] Checking for updates...
	I0214 02:53:55.584923 1135251 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:53:55.587208 1135251 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 02:53:55.589202 1135251 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 02:53:55.590985 1135251 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 02:53:55.594641 1135251 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 02:53:55.594901 1135251 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:53:55.615908 1135251 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:53:55.616014 1135251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:53:55.682550 1135251 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:53:55.672537937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:53:55.682652 1135251 docker.go:295] overlay module found
	I0214 02:53:55.684979 1135251 out.go:97] Using the docker driver based on user configuration
	I0214 02:53:55.685062 1135251 start.go:298] selected driver: docker
	I0214 02:53:55.685074 1135251 start.go:902] validating driver "docker" against <nil>
	I0214 02:53:55.685179 1135251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:53:55.746787 1135251 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:53:55.737844478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:53:55.746952 1135251 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:53:55.747237 1135251 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 02:53:55.747433 1135251 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 02:53:55.749907 1135251 out.go:169] Using Docker driver with root privileges
	I0214 02:53:55.752136 1135251 cni.go:84] Creating CNI manager for ""
	I0214 02:53:55.752163 1135251 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:53:55.752176 1135251 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 02:53:55.752187 1135251 start_flags.go:321] config:
	{Name:download-only-950365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-950365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:53:55.754248 1135251 out.go:97] Starting control plane node download-only-950365 in cluster download-only-950365
	I0214 02:53:55.754267 1135251 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0214 02:53:55.756943 1135251 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:53:55.756969 1135251 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:53:55.757012 1135251 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:53:55.771515 1135251 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:53:55.771648 1135251 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:53:55.771672 1135251 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 02:53:55.771677 1135251 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 02:53:55.771688 1135251 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 02:53:55.816645 1135251 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0214 02:53:55.816669 1135251 cache.go:56] Caching tarball of preloaded images
	I0214 02:53:55.816846 1135251 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:53:55.819003 1135251 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0214 02:53:55.819041 1135251 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:53:55.934509 1135251 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0214 02:54:10.805702 1135251 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:54:10.806504 1135251 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:54:11.721779 1135251 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0214 02:54:11.722151 1135251 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/download-only-950365/config.json ...
	I0214 02:54:11.722187 1135251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/download-only-950365/config.json: {Name:mk1b00d081c51c4d452e6b8bcf5356a066ad2034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:54:11.722839 1135251 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0214 02:54:11.723003 1135251 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-950365"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
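The repeated `docker system info --format "{{json .}}"` probes in the logs above are how the host facts in info.go's dump (NCPU, MemTotal, OperatingSystem, ...) are collected: one CLI call returning a single JSON document. A minimal sketch of issuing the same probe and decoding a few fields, assuming a local docker CLI; the struct lists only the keys it reads, which are visible in the dump, not docker's full schema:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields this sketch reads; docker's JSON has many more.
type dockerInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	OperatingSystem string `json:"OperatingSystem"`
}

func main() {
	// Same probe cli_runner logs above: docker info as one JSON document.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cpus=%d mem=%d os=%q\n", info.NCPU, info.MemTotal, info.OperatingSystem)
}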

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-950365
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.29.0-rc.2/json-events (32.24s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-695284 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-695284 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (32.235672023s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (32.24s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-695284
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-695284: exit status 85 (79.95201ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-630494 | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC |                     |
	|         | -p download-only-630494           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC | 14 Feb 24 02:53 UTC |
	| delete  | -p download-only-630494           | download-only-630494 | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC | 14 Feb 24 02:53 UTC |
	| start   | -o=json --download-only           | download-only-950365 | jenkins | v1.32.0 | 14 Feb 24 02:53 UTC |                     |
	|         | -p download-only-950365           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| delete  | -p download-only-950365           | download-only-950365 | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC | 14 Feb 24 02:54 UTC |
	| start   | -o=json --download-only           | download-only-695284 | jenkins | v1.32.0 | 14 Feb 24 02:54 UTC |                     |
	|         | -p download-only-695284           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:54:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:54:18.233074 1135412 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:54:18.233276 1135412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:54:18.233303 1135412 out.go:304] Setting ErrFile to fd 2...
	I0214 02:54:18.233325 1135412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:54:18.233641 1135412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 02:54:18.234110 1135412 out.go:298] Setting JSON to true
	I0214 02:54:18.235042 1135412 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20204,"bootTime":1707859054,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 02:54:18.235176 1135412 start.go:138] virtualization:  
	I0214 02:54:18.238274 1135412 out.go:97] [download-only-695284] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:54:18.240792 1135412 out.go:169] MINIKUBE_LOCATION=18166
	I0214 02:54:18.238541 1135412 notify.go:220] Checking for updates...
	I0214 02:54:18.245207 1135412 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:54:18.247862 1135412 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 02:54:18.249885 1135412 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 02:54:18.252320 1135412 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 02:54:18.256730 1135412 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 02:54:18.257017 1135412 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:54:18.280910 1135412 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:54:18.281009 1135412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:54:18.352112 1135412 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:54:18.341727698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:54:18.352213 1135412 docker.go:295] overlay module found
	I0214 02:54:18.354374 1135412 out.go:97] Using the docker driver based on user configuration
	I0214 02:54:18.354401 1135412 start.go:298] selected driver: docker
	I0214 02:54:18.354408 1135412 start.go:902] validating driver "docker" against <nil>
	I0214 02:54:18.354519 1135412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:54:18.406893 1135412 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:54:18.398152391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:54:18.407086 1135412 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:54:18.407387 1135412 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 02:54:18.407564 1135412 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 02:54:18.410389 1135412 out.go:169] Using Docker driver with root privileges
	I0214 02:54:18.412735 1135412 cni.go:84] Creating CNI manager for ""
	I0214 02:54:18.412764 1135412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0214 02:54:18.412775 1135412 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 02:54:18.412788 1135412 start_flags.go:321] config:
	{Name:download-only-695284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-695284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:54:18.414896 1135412 out.go:97] Starting control plane node download-only-695284 in cluster download-only-695284
	I0214 02:54:18.414916 1135412 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0214 02:54:18.416839 1135412 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:54:18.416872 1135412 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0214 02:54:18.417048 1135412 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:54:18.431570 1135412 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:54:18.431711 1135412 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:54:18.431736 1135412 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 02:54:18.431748 1135412 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 02:54:18.431756 1135412 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 02:54:18.481999 1135412 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0214 02:54:18.482030 1135412 cache.go:56] Caching tarball of preloaded images
	I0214 02:54:18.482192 1135412 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0214 02:54:18.484660 1135412 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0214 02:54:18.484688 1135412 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:54:18.601367 1135412 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0214 02:54:43.276349 1135412 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:54:43.277075 1135412 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0214 02:54:44.148550 1135412 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0214 02:54:44.148909 1135412 profile.go:148] Saving config to /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/download-only-695284/config.json ...
	I0214 02:54:44.148946 1135412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/download-only-695284/config.json: {Name:mkf5d08cb9337fd795e7743d1629b3998b4bbdb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:54:44.149751 1135412 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0214 02:54:44.150571 1135412 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18166-1129740/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-695284"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
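
Exit status 85 here is the expected outcome, not a failure: a --download-only start only populates the cache and never creates a control-plane node, so "minikube logs" has no cluster to read. A minimal way to reproduce this by hand (profile name hypothetical):

    # download artifacts only; no node is provisioned
    minikube start -p demo --download-only --kubernetes-version=v1.29.0-rc.2
    # logs then exits 85 because the control plane node does not exist
    minikube logs -p demo; echo "exit: $?"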

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-695284
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-348755 --alsologtostderr --binary-mirror http://127.0.0.1:39189 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-348755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-348755
--- PASS: TestBinaryMirror (0.56s)
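
TestBinaryMirror verifies that the kubectl/kubelet/kubeadm binaries can be fetched from an alternate HTTP endpoint instead of dl.k8s.io. A hand-run sketch (mirror URL and profile name hypothetical; the server must mirror the dl.k8s.io release path layout):

    # point binary downloads at a local mirror instead of dl.k8s.io
    minikube start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:8080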

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-107916
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-107916: exit status 85 (82.18482ms)

-- stdout --
	* Profile "addons-107916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-107916"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-107916
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-107916: exit status 85 (87.198579ms)

-- stdout --
	* Profile "addons-107916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-107916"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
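
Both PreSetup tests assert the same guard: addon commands against a profile that does not exist fail fast with exit status 85 instead of creating state. Reproducible by hand (profile name hypothetical):

    minikube addons enable dashboard -p no-such-profile; echo "exit: $?"   # expect 85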

TestAddons/Setup (137.22s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-107916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-107916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m17.220610775s)
--- PASS: TestAddons/Setup (137.22s)

TestAddons/parallel/Registry (17.92s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 54.715697ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vq7pw" [76ecca74-b904-428a-957c-e497f46f916d] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005925735s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4vspg" [ed6185ac-833d-49bc-9dbd-44ca26c256ef] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005346673s
addons_test.go:340: (dbg) Run:  kubectl --context addons-107916 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-107916 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-107916 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.451933436s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 ip
2024/02/14 02:57:26 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.92s)
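
The registry probe above reduces to an HTTP request against the addon's in-cluster Service DNS name from a throwaway pod; outside the harness, the same check looks roughly like this (context name taken from the log):

    kubectl --context addons-107916 run --rm -it registry-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"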

TestAddons/parallel/InspektorGadget (11.99s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fs22b" [38fd14c8-107a-4cb1-984f-92d9e25eea1d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004126058s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-107916
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-107916: (5.985485357s)
--- PASS: TestAddons/parallel/InspektorGadget (11.99s)

TestAddons/parallel/MetricsServer (6.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 9.164539ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-xgpcx" [a75b205a-055e-4b2e-82c2-53e542d18ae2] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004886565s
addons_test.go:415: (dbg) Run:  kubectl --context addons-107916 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

TestAddons/parallel/Headlamp (11.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-107916 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-107916 --alsologtostderr -v=1: (2.010251407s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-59lmx" [655f6b3f-c328-490e-96cd-686da1d5b29c] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-59lmx" [655f6b3f-c328-490e-96cd-686da1d5b29c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-59lmx" [655f6b3f-c328-490e-96cd-686da1d5b29c] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003750844s
--- PASS: TestAddons/parallel/Headlamp (11.02s)

TestAddons/parallel/CloudSpanner (5.81s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-hdd6p" [9c9307d9-1a1f-4a26-b161-834ca1d1b41d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007434605s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-107916
--- PASS: TestAddons/parallel/CloudSpanner (5.81s)

TestAddons/parallel/LocalPath (9.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-107916 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-107916 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-107916 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [77317777-434a-41dd-a6c6-72c69b6c32ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [77317777-434a-41dd-a6c6-72c69b6c32ce] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [77317777-434a-41dd-a6c6-72c69b6c32ce] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003439912s
addons_test.go:891: (dbg) Run:  kubectl --context addons-107916 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 ssh "cat /opt/local-path-provisioner/pvc-2358e9d1-a1ee-49c0-8dab-57be5f72d3ad_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-107916 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-107916 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-107916 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.62s)
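
The PVC in this test binds against the Rancher local-path provisioner installed by the storage-provisioner-rancher addon. A minimal equivalent claim, applied inline (the storage class name local-path is assumed to be the provisioner's default, and the size is arbitrary):

    kubectl --context addons-107916 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
EOF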

TestAddons/parallel/NvidiaDevicePlugin (6.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qp5mc" [e83ab22d-76cc-418f-9a1e-704888f17ca0] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004941847s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-107916
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.66s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-mv4gt" [327e4f1f-30f8-4171-af93-5f35e4befbdd] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004049567s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-107916 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-107916 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)
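
What this test checks is that the gcp-auth addon propagates its credentials secret into namespaces created after the addon is enabled, so workloads there pick up credentials automatically; by hand:

    kubectl --context addons-107916 create ns new-namespace
    # the gcp-auth secret should appear without any manual copying
    kubectl --context addons-107916 get secret gcp-auth -n new-namespace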

TestAddons/StoppedEnableDisable (12.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-107916
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-107916: (12.006890927s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-107916
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-107916
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-107916
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

TestCertOptions (40.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-043588 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-043588 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (37.406875617s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-043588 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-043588 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-043588 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-043588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-043588
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-043588: (1.990929205s)
--- PASS: TestCertOptions (40.14s)
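
The ssh step above inspects the generated apiserver certificate for the extra names and IPs passed via --apiserver-ips and --apiserver-names. The same check from the host, filtered down to the SANs (profile name taken from the log):

    minikube -p cert-options-043588 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"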

TestCertExpiration (230.57s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-234729 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-234729 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.756766909s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-234729 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-234729 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.635983206s)
helpers_test.go:175: Cleaning up "cert-expiration-234729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-234729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-234729: (2.17339164s)
--- PASS: TestCertExpiration (230.57s)
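
The two starts exercise certificate rotation: the first issues certs with a three-minute TTL, and once they expire the second start (with a one-year TTL) must regenerate them rather than fail. A hand-run sketch (profile name hypothetical):

    minikube start -p cert-demo --cert-expiration=3m
    sleep 180                                             # let the certs expire
    minikube start -p cert-demo --cert-expiration=8760h   # should rotate them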

TestForceSystemdFlag (45.74s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-892475 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-892475 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.96812872s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-892475 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-892475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-892475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-892475: (2.278121032s)
--- PASS: TestForceSystemdFlag (45.74s)
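
The config.toml check asserts that --force-systemd switched runc inside the node to the systemd cgroup driver. By hand (profile name taken from the log):

    minikube -p force-systemd-flag-892475 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"   # expect: SystemdCgroup = true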

TestForceSystemdEnv (42.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-051392 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-051392 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.669883703s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-051392 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-051392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-051392
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-051392: (2.254210766s)
--- PASS: TestForceSystemdEnv (42.36s)
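
TestForceSystemdEnv covers the same behavior driven by the environment rather than a flag; the harness sets MINIKUBE_FORCE_SYSTEMD=true before starting. Equivalent by hand (profile name hypothetical):

    MINIKUBE_FORCE_SYSTEMD=true minikube start -p systemd-env-demo \
      --driver=docker --container-runtime=containerd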

TestDockerEnvContainerd (47.35s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-506806 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-506806 --driver=docker  --container-runtime=containerd: (31.509152713s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-506806"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-506806": (1.304869996s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-2X8anCiCD5Zj/agent.1152113" SSH_AGENT_PID="1152114" DOCKER_HOST=ssh://docker@127.0.0.1:34037 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-2X8anCiCD5Zj/agent.1152113" SSH_AGENT_PID="1152114" DOCKER_HOST=ssh://docker@127.0.0.1:34037 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-2X8anCiCD5Zj/agent.1152113" SSH_AGENT_PID="1152114" DOCKER_HOST=ssh://docker@127.0.0.1:34037 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.231569334s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-2X8anCiCD5Zj/agent.1152113" SSH_AGENT_PID="1152114" DOCKER_HOST=ssh://docker@127.0.0.1:34037 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-506806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-506806
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-506806: (1.986069351s)
--- PASS: TestDockerEnvContainerd (47.35s)
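
The SSH_AUTH_SOCK/DOCKER_HOST plumbing the test scripts out explicitly is what the documented eval pattern does in one step: it routes the host docker CLI to the daemon inside the minikube node over ssh:

    eval "$(minikube -p dockerenv-506806 docker-env --ssh-host --ssh-add)"
    docker image ls    # now lists images inside the minikube node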

TestErrorSpam/setup (30.42s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-013257 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-013257 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-013257 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-013257 --driver=docker  --container-runtime=containerd: (30.416361734s)
--- PASS: TestErrorSpam/setup (30.42s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.01s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 stop: (1.277518609s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013257 --log_dir /tmp/nospam-013257 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18166-1129740/.minikube/files/etc/test/nested/copy/1135087/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.02s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-991896 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-991896 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (59.017982383s)
--- PASS: TestFunctional/serial/StartWithProxy (59.02s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-991896 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-991896 --alsologtostderr -v=8: (5.586081567s)
functional_test.go:659: soft start took 5.588408175s for "functional-991896" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.59s)
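
"Soft start" means re-running minikube start against a profile whose cluster is already up; minikube reconciles the existing state instead of reprovisioning, which is why it completes in seconds rather than minutes:

    minikube start -p functional-991896    # near-no-op against a running cluster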

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-991896 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 cache add registry.k8s.io/pause:3.1: (1.437769437s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 cache add registry.k8s.io/pause:3.3: (1.397605776s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 cache add registry.k8s.io/pause:latest: (1.202618028s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-991896 /tmp/TestFunctionalserialCacheCmdcacheadd_local298350339/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cache add minikube-local-cache-test:functional-991896
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cache delete minikube-local-cache-test:functional-991896
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-991896
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (315.238486ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 cache reload: (1.196835245s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)
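
The sequence above is the cache round-trip: delete the image inside the node with crictl, confirm "inspecti" now fails, run "cache reload", and confirm the image is back. A minimal sketch of that flow, under the same profile-name assumption:

	package main

	import (
		"log"
		"os/exec"
	)

	func mk(args ...string) ([]byte, error) {
		all := append([]string{"-p", "functional-991896"}, args...)
		return exec.Command("minikube", all...).CombinedOutput()
	}

	func main() {
		img := "registry.k8s.io/pause:latest"
		if out, err := mk("ssh", "sudo crictl rmi "+img); err != nil {
			log.Fatalf("rmi: %v\n%s", err, out)
		}
		// The image is gone from the node, so inspecti must fail...
		if _, err := mk("ssh", "sudo crictl inspecti "+img); err == nil {
			log.Fatal("expected inspecti to fail after rmi")
		}
		// ...until "cache reload" pushes every cached image back in.
		if out, err := mk("cache", "reload"); err != nil {
			log.Fatalf("cache reload: %v\n%s", err, out)
		}
		if out, err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
			log.Fatalf("inspecti after reload: %v\n%s", err, out)
		}
	}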

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 kubectl -- --context functional-991896 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.18s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-991896 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/LogsCmd (1.52s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 logs: (1.523671472s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.7s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 logs --file /tmp/TestFunctionalserialLogsFileCmd2659774811/001/logs.txt
E0214 03:02:20.137991 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 logs --file /tmp/TestFunctionalserialLogsFileCmd2659774811/001/logs.txt: (1.69547543s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

TestFunctional/serial/InvalidService (4.31s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-991896 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-991896
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-991896: exit status 115 (439.322451ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30571 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-991896 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)
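
The point of this test is the exit code: "minikube service" for a service whose pods never start fails with SVC_UNREACHABLE, surfaced as exit status 115. A minimal sketch that checks just that, assuming minikube on PATH:

	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "service", "invalid-svc",
			"-p", "functional-991896").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 115 {
			log.Println("got the expected SVC_UNREACHABLE exit (115)")
			return
		}
		log.Fatalf("expected exit status 115, got %v", err)
	}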

TestFunctional/parallel/ConfigCmd (0.59s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 config get cpus: exit status 14 (95.559105ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 config get cpus: exit status 14 (96.591206ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.59s)
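
The unset/set/get cycle above relies on "config get" exiting with status 14 when the key is absent. A minimal sketch of the same cycle, assuming minikube on PATH:

	package main

	import (
		"errors"
		"log"
		"os/exec"
		"strings"
	)

	func config(args ...string) (string, error) {
		all := append([]string{"-p", "functional-991896", "config"}, args...)
		out, err := exec.Command("minikube", all...).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		if _, err := config("set", "cpus", "2"); err != nil {
			log.Fatalf("set: %v", err)
		}
		if v, err := config("get", "cpus"); err != nil || v != "2" {
			log.Fatalf("get: want 2, got %q (%v)", v, err)
		}
		if _, err := config("unset", "cpus"); err != nil {
			log.Fatalf("unset: %v", err)
		}
		// After unset, "config get" reports the missing key via exit 14.
		_, err := config("get", "cpus")
		var ee *exec.ExitError
		if !errors.As(err, &ee) || ee.ExitCode() != 14 {
			log.Fatalf("expected exit 14 for an unset key, got %v", err)
		}
	}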

TestFunctional/parallel/DashboardCmd (11.6s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-991896 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-991896 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1166156: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.60s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-991896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-991896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (212.111537ms)

-- stdout --
	* [functional-991896] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0214 03:02:58.135850 1165829 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:02:58.136078 1165829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:02:58.136092 1165829 out.go:304] Setting ErrFile to fd 2...
	I0214 03:02:58.136100 1165829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:02:58.137013 1165829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:02:58.137534 1165829 out.go:298] Setting JSON to false
	I0214 03:02:58.138675 1165829 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20724,"bootTime":1707859054,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 03:02:58.138758 1165829 start.go:138] virtualization:  
	I0214 03:02:58.142763 1165829 out.go:177] * [functional-991896] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 03:02:58.145104 1165829 out.go:177]   - MINIKUBE_LOCATION=18166
	I0214 03:02:58.147407 1165829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:02:58.145288 1165829 notify.go:220] Checking for updates...
	I0214 03:02:58.152875 1165829 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:02:58.156406 1165829 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 03:02:58.158610 1165829 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:02:58.161336 1165829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:02:58.164506 1165829 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:02:58.165081 1165829 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:02:58.193923 1165829 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:02:58.194035 1165829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:02:58.275186 1165829 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-14 03:02:58.265136772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:02:58.275303 1165829 docker.go:295] overlay module found
	I0214 03:02:58.277504 1165829 out.go:177] * Using the docker driver based on existing profile
	I0214 03:02:58.279906 1165829 start.go:298] selected driver: docker
	I0214 03:02:58.279926 1165829 start.go:902] validating driver "docker" against &{Name:functional-991896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:02:58.280050 1165829 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:02:58.283008 1165829 out.go:177] 
	W0214 03:02:58.285157 1165829 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0214 03:02:58.287031 1165829 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-991896 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
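
Here --dry-run exercises only the validation path: a 250MB request is below minikube's 1800MB floor, so start exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the cluster. A minimal sketch of the same check, assuming minikube on PATH:

	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "start", "-p", "functional-991896",
			"--dry-run", "--memory", "250MB",
			"--driver=docker", "--container-runtime=containerd").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 23 {
			log.Println("dry-run rejected 250MB as expected (exit 23)")
			return
		}
		log.Fatalf("expected exit status 23, got %v", err)
	}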

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-991896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-991896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (205.95629ms)

-- stdout --
	* [functional-991896] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0214 03:02:57.941267 1165787 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:02:57.941528 1165787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:02:57.941559 1165787 out.go:304] Setting ErrFile to fd 2...
	I0214 03:02:57.941584 1165787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:02:57.942432 1165787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:02:57.942887 1165787 out.go:298] Setting JSON to false
	I0214 03:02:57.943968 1165787 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20724,"bootTime":1707859054,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 03:02:57.944073 1165787 start.go:138] virtualization:  
	I0214 03:02:57.946939 1165787 out.go:177] * [functional-991896] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0214 03:02:57.949765 1165787 out.go:177]   - MINIKUBE_LOCATION=18166
	I0214 03:02:57.949872 1165787 notify.go:220] Checking for updates...
	I0214 03:02:57.951947 1165787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:02:57.954327 1165787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:02:57.956792 1165787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 03:02:57.958773 1165787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:02:57.960784 1165787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:02:57.963063 1165787 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:02:57.963724 1165787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:02:57.989157 1165787 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:02:57.989314 1165787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:02:58.064426 1165787 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-14 03:02:58.054709468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:02:58.064555 1165787 docker.go:295] overlay module found
	I0214 03:02:58.067298 1165787 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0214 03:02:58.068954 1165787 start.go:298] selected driver: docker
	I0214 03:02:58.068978 1165787 start.go:902] validating driver "docker" against &{Name:functional-991896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-991896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:02:58.069094 1165787 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:02:58.071697 1165787 out.go:177] 
	W0214 03:02:58.073854 1165787 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0214 03:02:58.075982 1165787 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.2s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
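
The -f flag formats "minikube status" through a Go template; -o json emits the same fields as JSON. A minimal decoding sketch, where the field names Host, Kubelet, APIServer, and Kubeconfig are taken from the template in the log above:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-991896",
			"status", "-o", "json").Output()
		if err != nil {
			log.Fatalf("status: %v", err)
		}
		var st struct{ Host, Kubelet, APIServer, Kubeconfig string }
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatalf("decode: %v", err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}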

TestFunctional/parallel/ServiceCmdConnect (8.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-991896 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-991896 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6fl44" [fe48b305-cc84-4551-afb8-bbc5f449d761] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6fl44" [fe48b305-cc84-4551-afb8-bbc5f449d761] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006688008s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30727
functional_test.go:1671: http://192.168.49.2:30727: success! body:

Hostname: hello-node-connect-7799dfb7c6-6fl44

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30727
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.69s)
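
The test deploys an echoserver, exposes it as a NodePort service, asks "minikube service ... --url" for the reachable URL, and GETs it. A minimal sketch of the last two steps, assuming the deployment and service above already exist:

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-991896",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			log.Fatalf("service --url: %v", err)
		}
		url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30727
		resp, err := http.Get(url)
		if err != nil {
			log.Fatalf("GET %s: %v", url, err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s -> %d\n%s", url, resp.StatusCode, body)
	}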

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.59s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3c8003a7-b2ec-4b9f-976e-b4eb23488340] Running
E0214 03:02:30.378255 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004957663s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-991896 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-991896 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-991896 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-991896 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [870d6ea1-2250-4a05-a076-ccb47c3e9a9f] Pending
helpers_test.go:344: "sp-pod" [870d6ea1-2250-4a05-a076-ccb47c3e9a9f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [870d6ea1-2250-4a05-a076-ccb47c3e9a9f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004716926s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-991896 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-991896 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-991896 delete -f testdata/storage-provisioner/pod.yaml: (1.433857304s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-991896 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7f974fd4-a368-479a-a4c1-4de3456a4dd9] Pending
helpers_test.go:344: "sp-pod" [7f974fd4-a368-479a-a4c1-4de3456a4dd9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7f974fd4-a368-479a-a4c1-4de3456a4dd9] Running
E0214 03:02:50.859032 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005377497s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-991896 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.59s)
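
The persistence check is: write a file through the PVC-backed pod, delete and recreate the pod, then confirm the file survived. A minimal sketch using the same kubectl context and testdata manifests (the readiness wait between apply and exec is elided here):

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func kubectl(args ...string) (string, error) {
		all := append([]string{"--context", "functional-991896"}, args...)
		out, err := exec.Command("kubectl", all...).CombinedOutput()
		return string(out), err
	}

	func main() {
		steps := [][]string{
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		}
		for _, s := range steps {
			if out, err := kubectl(s...); err != nil {
				log.Fatalf("%v: %v\n%s", s, err, out)
			}
		}
		out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
		if err != nil || !strings.Contains(out, "foo") {
			log.Fatalf("file did not survive pod recreation: %v\n%s", err, out)
		}
	}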

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (2.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh -n functional-991896 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cp functional-991896:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1989978253/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh -n functional-991896 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh -n functional-991896 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.68s)
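
"minikube cp" copies from the host into the node, and the ssh/cat runs verify the copy. A minimal sketch of one round-trip that compares contents byte-for-byte:

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatalf("read local: %v", err)
		}
		if out, err := exec.Command("minikube", "-p", "functional-991896",
			"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			log.Fatalf("cp: %v\n%s", err, out)
		}
		got, err := exec.Command("minikube", "-p", "functional-991896",
			"ssh", "-n", "functional-991896", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatalf("ssh cat: %v", err)
		}
		if !bytes.Equal(want, got) {
			log.Fatal("contents differ after cp")
		}
	}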

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1135087/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo cat /etc/test/nested/copy/1135087/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1135087.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo cat /etc/ssl/certs/1135087.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1135087.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo cat /usr/share/ca-certificates/1135087.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11350872.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo cat /etc/ssl/certs/11350872.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11350872.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo cat /usr/share/ca-certificates/11350872.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-991896 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
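
The go-template above prints every label key on the first node. The same data is easier to consume as JSON; a minimal sketch:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-991896",
			"get", "nodes", "-o", "json").Output()
		if err != nil {
			log.Fatalf("kubectl: %v", err)
		}
		var list struct {
			Items []struct {
				Metadata struct {
					Labels map[string]string `json:"labels"`
				} `json:"metadata"`
			} `json:"items"`
		}
		if err := json.Unmarshal(out, &list); err != nil || len(list.Items) == 0 {
			log.Fatalf("decode: %v", err)
		}
		for k := range list.Items[0].Metadata.Labels {
			fmt.Println(k)
		}
	}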

TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 ssh "sudo systemctl is-active docker": exit status 1 (365.108934ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 ssh "sudo systemctl is-active crio": exit status 1 (327.075027ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
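
With containerd as the runtime, the docker and crio units must be inactive; "systemctl is-active" prints the state and exits non-zero (status 3 inside the node, surfaced above as exit 1 from minikube ssh). A minimal sketch that only asserts the printed state:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "crio"} {
			// A non-zero exit is expected for inactive units, so the
			// error is ignored and only the first output line is read.
			out, _ := exec.Command("minikube", "-p", "functional-991896",
				"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
			state := strings.TrimSpace(strings.SplitN(string(out), "\n", 2)[0])
			if state != "inactive" {
				log.Fatalf("%s: expected inactive, got %q", unit, state)
			}
		}
	}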

TestFunctional/parallel/License (0.47s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-991896 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-991896 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-991896 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1163768: os: process already finished
helpers_test.go:502: unable to terminate pid 1163613: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-991896 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-991896 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-991896 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6a937593-13f3-4478-80b5-f784928056e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6a937593-13f3-4478-80b5-f784928056e0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004167324s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.52s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-991896 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
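
With "minikube tunnel" running, LoadBalancer services get a real ingress IP, which is what the jsonpath query above reads. A minimal polling sketch for the same field:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 30; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-991896",
				"get", "svc", "nginx-svc",
				"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
			if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
				fmt.Println("tunnel ingress IP:", ip)
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("no ingress IP after 60s; is `minikube tunnel` running?")
	}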

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.217.141 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-991896 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-991896 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-991896 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-t7jw4" [1f49a58d-d59d-4f55-a5e8-0ae683f4cfee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-t7jw4" [1f49a58d-d59d-4f55-a5e8-0ae683f4cfee] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003655393s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 service list -o json
functional_test.go:1490: Took "642.510317ms" to run "out/minikube-linux-arm64 -p functional-991896 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "389.80274ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "115.780844ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30440
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "409.419102ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "91.470814ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)
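
The JSON variants are the machine-readable counterparts of the plain listings above. A sketch of consuming them; the jq filter and the valid/invalid schema are assumptions about the output layout, which this log does not show:

# --light skips status probing, hence the ~90ms runtime above
out/minikube-linux-arm64 profile list -o json --light
# extract just the profile names, assuming a {"valid":[...],"invalid":[...]} document
out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'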

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.91s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdany-port3848071721/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707879775699315000" to /tmp/TestFunctionalparallelMountCmdany-port3848071721/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707879775699315000" to /tmp/TestFunctionalparallelMountCmdany-port3848071721/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707879775699315000" to /tmp/TestFunctionalparallelMountCmdany-port3848071721/001/test-1707879775699315000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 14 03:02 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 14 03:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 14 03:02 test-1707879775699315000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh cat /mount-9p/test-1707879775699315000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-991896 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5d199d4d-fa89-422f-a149-8ffad35e5ef1] Pending
helpers_test.go:344: "busybox-mount" [5d199d4d-fa89-422f-a149-8ffad35e5ef1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5d199d4d-fa89-422f-a149-8ffad35e5ef1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5d199d4d-fa89-422f-a149-8ffad35e5ef1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00409308s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-991896 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdany-port3848071721/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.91s)
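
The any-port flow above is reproducible by hand with the same commands the harness runs; only the host path below is illustrative:

# start the 9p mount daemon in the background
out/minikube-linux-arm64 mount -p functional-991896 /tmp/mount-demo:/mount-9p &
# confirm the guest sees a 9p filesystem, then inspect it
out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-991896 ssh -- ls -la /mount-9p
# tear down, mirroring the test's cleanup
out/minikube-linux-arm64 -p functional-991896 ssh "sudo umount -f /mount-9p"
kill %1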

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30440
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)
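
The ServiceCmd subtests exercise one lookup with different output shapes. A hand-run equivalent against the endpoint this run reported (the curl probe is illustrative; NodePort 30440 is specific to this run):

out/minikube-linux-arm64 -p functional-991896 service list -o json
out/minikube-linux-arm64 -p functional-991896 service --namespace=default --https --url hello-node
out/minikube-linux-arm64 -p functional-991896 service hello-node --url
curl -s http://192.168.49.2:30440/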

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.23s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdspecific-port4182546299/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (467.513021ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdspecific-port4182546299/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 ssh "sudo umount -f /mount-9p": exit status 1 (367.226921ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-991896 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdspecific-port4182546299/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)
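
specific-port differs from any-port only in pinning the host-side transport port; the initial findmnt failure above is the expected first probe before the mount settles. A sketch (host path illustrative):

out/minikube-linux-arm64 mount -p functional-991896 /tmp/mount-demo:/mount-9p --port 46464 &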

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.55s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1147226865/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1147226865/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1147226865/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T" /mount1: exit status 1 (1.027068162s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-991896 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1147226865/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1147226865/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-991896 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1147226865/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.55s)
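
VerifyCleanup checks that --kill=true tears down every mount daemon for the profile at once; the "unable to find parent, assuming dead" lines above are the harness confirming the processes are already gone. The same cleanup by hand (host path illustrative):

out/minikube-linux-arm64 mount -p functional-991896 /tmp/demo:/mount1 &
out/minikube-linux-arm64 mount -p functional-991896 /tmp/demo:/mount2 &
out/minikube-linux-arm64 mount -p functional-991896 --kill=true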

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.4s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 version -o=json --components: (1.396658598s)
--- PASS: TestFunctional/parallel/Version/components (1.40s)
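
Both version probes are directly reproducible; -o=json --components additionally reports the bundled runtime tooling versions (the exact component set is not shown in this log):

out/minikube-linux-arm64 -p functional-991896 version --short
out/minikube-linux-arm64 -p functional-991896 version -o=json --components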

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-991896 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-991896
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-991896 image ls --format short --alsologtostderr:
I0214 03:03:25.577920 1168380 out.go:291] Setting OutFile to fd 1 ...
I0214 03:03:25.578077 1168380 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:25.578088 1168380 out.go:304] Setting ErrFile to fd 2...
I0214 03:03:25.578094 1168380 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:25.578557 1168380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
I0214 03:03:25.579768 1168380 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:25.580176 1168380 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:25.580997 1168380 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
I0214 03:03:25.603822 1168380 ssh_runner.go:195] Run: systemctl --version
I0214 03:03:25.603947 1168380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
I0214 03:03:25.635738 1168380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
I0214 03:03:25.738399 1168380 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-991896 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | latest             | sha256:11deb5 | 67.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| docker.io/library/nginx                     | alpine             | sha256:d315ef | 17.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| docker.io/library/minikube-local-cache-test | functional-991896  | sha256:0c002a | 1.01kB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-991896 image ls --format table --alsologtostderr:
I0214 03:03:25.873122 1168437 out.go:291] Setting OutFile to fd 1 ...
I0214 03:03:25.873463 1168437 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:25.873492 1168437 out.go:304] Setting ErrFile to fd 2...
I0214 03:03:25.873513 1168437 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:25.873844 1168437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
I0214 03:03:25.874580 1168437 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:25.874822 1168437 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:25.875540 1168437 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
I0214 03:03:25.903980 1168437 ssh_runner.go:195] Run: systemctl --version
I0214 03:03:25.904034 1168437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
I0214 03:03:25.930455 1168437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
I0214 03:03:26.024323 1168437 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-991896 image ls --format json --alsologtostderr:
[{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:0c002a3b0db0ee7befa1424554d1717ac2f39a399f591e383f747e53ce0f848f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-991896"],"size":"1006"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:d315ef79be32cd8ae44f153a41c42e5e407c04f959074ddb8acc2c26649e2676","repoDigests":["docker.io/library/nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17612166"},{"id":"sha256:11deb55301007d6bf1db2ce20cb5d12e447541969990af4a03e2af8141ebdbed","repoDigests":["docker.io/library/nginx@sha256:ac2b22fdbe4c13e6f3be8c5fe9a19677aa7614acaa1cbf5d354af723873cbd28"],"repoTags":["docker.io/library/nginx:latest"],"size":"67249293"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-991896 image ls --format json --alsologtostderr:
I0214 03:03:25.587686 1168381 out.go:291] Setting OutFile to fd 1 ...
I0214 03:03:25.587946 1168381 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:25.587978 1168381 out.go:304] Setting ErrFile to fd 2...
I0214 03:03:25.588000 1168381 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:25.588276 1168381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
I0214 03:03:25.588977 1168381 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:25.589189 1168381 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:25.589722 1168381 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
I0214 03:03:25.612248 1168381 ssh_runner.go:195] Run: systemctl --version
I0214 03:03:25.612300 1168381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
I0214 03:03:25.633538 1168381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
I0214 03:03:25.728080 1168381 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-991896 image ls --format yaml --alsologtostderr:
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:11deb55301007d6bf1db2ce20cb5d12e447541969990af4a03e2af8141ebdbed
repoDigests:
- docker.io/library/nginx@sha256:ac2b22fdbe4c13e6f3be8c5fe9a19677aa7614acaa1cbf5d354af723873cbd28
repoTags:
- docker.io/library/nginx:latest
size: "67249293"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:0c002a3b0db0ee7befa1424554d1717ac2f39a399f591e383f747e53ce0f848f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-991896
size: "1006"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:d315ef79be32cd8ae44f153a41c42e5e407c04f959074ddb8acc2c26649e2676
repoDigests:
- docker.io/library/nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076
repoTags:
- docker.io/library/nginx:alpine
size: "17612166"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-991896 image ls --format yaml --alsologtostderr:
I0214 03:03:26.158925 1168515 out.go:291] Setting OutFile to fd 1 ...
I0214 03:03:26.159086 1168515 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:26.159098 1168515 out.go:304] Setting ErrFile to fd 2...
I0214 03:03:26.159105 1168515 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:26.159808 1168515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
I0214 03:03:26.160829 1168515 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:26.160968 1168515 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:26.161789 1168515 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
I0214 03:03:26.187254 1168515 ssh_runner.go:195] Run: systemctl --version
I0214 03:03:26.187310 1168515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
I0214 03:03:26.211345 1168515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
I0214 03:03:26.299937 1168515 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
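
The four ImageList subtests run the same command through different renderers, so their outputs can be compared side by side:

for f in short table json yaml; do
  out/minikube-linux-arm64 -p functional-991896 image ls --format "$f"
done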

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-991896 ssh pgrep buildkitd: exit status 1 (361.527623ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image build -t localhost/my-image:functional-991896 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-991896 image build -t localhost/my-image:functional-991896 testdata/build --alsologtostderr: (2.097727391s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-991896 image build -t localhost/my-image:functional-991896 testdata/build --alsologtostderr:
I0214 03:03:26.258947 1168527 out.go:291] Setting OutFile to fd 1 ...
I0214 03:03:26.265063 1168527 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:26.265085 1168527 out.go:304] Setting ErrFile to fd 2...
I0214 03:03:26.265093 1168527 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:03:26.265419 1168527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
I0214 03:03:26.266293 1168527 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:26.268139 1168527 config.go:182] Loaded profile config "functional-991896": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0214 03:03:26.268712 1168527 cli_runner.go:164] Run: docker container inspect functional-991896 --format={{.State.Status}}
I0214 03:03:26.286706 1168527 ssh_runner.go:195] Run: systemctl --version
I0214 03:03:26.286757 1168527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-991896
I0214 03:03:26.308160 1168527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/functional-991896/id_rsa Username:docker}
I0214 03:03:26.420075 1168527 build_images.go:151] Building image from path: /tmp/build.2513939770.tar
I0214 03:03:26.420153 1168527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0214 03:03:26.429523 1168527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2513939770.tar
I0214 03:03:26.433271 1168527 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2513939770.tar: stat -c "%s %y" /var/lib/minikube/build/build.2513939770.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2513939770.tar': No such file or directory
I0214 03:03:26.433315 1168527 ssh_runner.go:362] scp /tmp/build.2513939770.tar --> /var/lib/minikube/build/build.2513939770.tar (3072 bytes)
I0214 03:03:26.458715 1168527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2513939770
I0214 03:03:26.468160 1168527 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2513939770 -xf /var/lib/minikube/build/build.2513939770.tar
I0214 03:03:26.478400 1168527 containerd.go:379] Building image: /var/lib/minikube/build/build.2513939770
I0214 03:03:26.478483 1168527 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2513939770 --local dockerfile=/var/lib/minikube/build/build.2513939770 --output type=image,name=localhost/my-image:functional-991896
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:2982d4c80b83a2a3e131c844fd5dfd3a5da36e6ce1abce5947fa255eadb9f0bd
#8 exporting manifest sha256:2982d4c80b83a2a3e131c844fd5dfd3a5da36e6ce1abce5947fa255eadb9f0bd 0.0s done
#8 exporting config sha256:001a961221de8628d2bf4faf1dabd8760430ee1601f242821417fa99a94a5efd 0.0s done
#8 naming to localhost/my-image:functional-991896 done
#8 DONE 0.1s
I0214 03:03:28.247993 1168527 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2513939770 --local dockerfile=/var/lib/minikube/build/build.2513939770 --output type=image,name=localhost/my-image:functional-991896: (1.769480244s)
I0214 03:03:28.248086 1168527 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2513939770
I0214 03:03:28.257810 1168527 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2513939770.tar
I0214 03:03:28.266412 1168527 build_images.go:207] Built localhost/my-image:functional-991896 from /tmp/build.2513939770.tar
I0214 03:03:28.266442 1168527 build_images.go:123] succeeded building to: functional-991896
I0214 03:03:28.266448 1168527 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.70s)
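
The BuildKit trace above implies a three-step Dockerfile. A reconstruction sufficient to reproduce the build; the real testdata/build contents may differ, and content.txt here is a placeholder:

mkdir -p /tmp/build-demo && cd /tmp/build-demo
printf 'demo' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-linux-arm64 -p functional-991896 image build -t localhost/my-image:functional-991896 .
out/minikube-linux-arm64 -p functional-991896 image ls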

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/02/14 03:03:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.74545983s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-991896
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
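
All three UpdateContextCmd subtests run the same command and vary only the starting kubeconfig state; update-context rewrites the profile's kubeconfig entry to match the cluster's current endpoint:

out/minikube-linux-arm64 -p functional-991896 update-context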

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image rm gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-991896
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-991896 image save --daemon gcr.io/google-containers/addon-resizer:functional-991896 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-991896
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
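
Setup, ImageRemove and ImageSaveDaemon together form a tag/remove/restore round trip through the cluster's image store. Run by hand, assuming the tagged image was already loaded into the cluster earlier in the suite (outside this excerpt):

docker pull gcr.io/google-containers/addon-resizer:1.8.8
docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-991896
# drop the local tag, then restore it from the cluster via the Docker daemon
docker rmi gcr.io/google-containers/addon-resizer:functional-991896
out/minikube-linux-arm64 -p functional-991896 image save --daemon gcr.io/google-containers/addon-resizer:functional-991896
docker image inspect gcr.io/google-containers/addon-resizer:functional-991896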

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-991896
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-991896
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-991896
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (105.89s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-089373 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0214 03:03:31.819777 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:04:53.740691 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-089373 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m45.886525361s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (105.89s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.98s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons enable ingress --alsologtostderr -v=5: (8.977500757s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.98s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)
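
The legacy-ingress sequence above condenses to three commands, all taken verbatim from this run:

out/minikube-linux-arm64 start -p ingress-addon-legacy-089373 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons enable ingress
out/minikube-linux-arm64 -p ingress-addon-legacy-089373 addons enable ingress-dns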

                                                
                                    
TestJSONOutput/start/Command (78.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-384956 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0214 03:07:09.895373 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:07:27.953704 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:27.958986 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:27.969224 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:27.989525 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:28.029848 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:28.110190 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:28.270634 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:28.591187 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:29.232109 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:30.512886 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:33.073822 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:07:37.580942 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:07:38.194796 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-384956 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m18.630913299s)
--- PASS: TestJSONOutput/start/Command (78.63s)
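
With --output=json every line minikube prints is a structured event, which is what the Audit and parallel step checks below consume. A sketch of reading the stream; the event type string and the .data.name field are assumptions about the schema, which this log does not show:

out/minikube-linux-arm64 start -p json-output-384956 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'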

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-384956 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-384956 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.77s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-384956 --output=json --user=testUser
E0214 03:07:48.434974 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-384956 --output=json --user=testUser: (5.765529658s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-800697 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-800697 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.208858ms)

-- stdout --
	{"specversion":"1.0","id":"489fdaaa-628e-49f2-963b-0959e65a27a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-800697] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b3be226-2618-4eca-a23c-f8c64ca0aff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18166"}}
	{"specversion":"1.0","id":"dfd1a490-74e3-42f1-b216-37c286d55d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"efa92a9d-f6bf-4b97-ad3b-df1ebd4afe7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig"}}
	{"specversion":"1.0","id":"29e756ee-2cec-4ed8-a1f1-5e0a116dbb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube"}}
	{"specversion":"1.0","id":"4408b6f2-efe7-453a-9e84-55fb738a9cd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dcddffdd-8309-4545-b35b-c40ffc180404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a8cbe02f-de22-4bbf-894f-a623f0dcff76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-800697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-800697
--- PASS: TestErrorJSONOutput (0.24s)
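
Every line minikube prints under --output=json is a standalone CloudEvents-style JSON object (specversion/id/source/type/data), so the stream above can be checked one line at a time. A minimal sketch of pulling the error event out of a failing start, assuming jq is available; the profile name is hypothetical:

  out/minikube-linux-arm64 start -p json-demo --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # prints: The driver 'fail' is not supported on linux/arm64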

TestKicCustomNetwork/create_custom_network (52.67s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-096713 --network=
E0214 03:08:08.915384 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-096713 --network=: (50.506803916s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-096713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-096713
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-096713: (2.145041585s)
--- PASS: TestKicCustomNetwork/create_custom_network (52.67s)

TestKicCustomNetwork/use_default_bridge_network (33.7s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-713798 --network=bridge
E0214 03:08:49.876536 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-713798 --network=bridge: (31.75383597s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-713798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-713798
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-713798: (1.924320648s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.70s)
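
Both TestKicCustomNetwork variants exercise the --network flag of the docker (KIC) driver: left empty, minikube creates a dedicated Docker network for the profile (which is what the docker network ls probe above verifies), while --network=bridge attaches the node container to Docker's default bridge instead. A rough reproduction of that check, with a hypothetical profile name:

  out/minikube-linux-arm64 start -p net-demo --network=bridge
  docker network ls --format '{{.Name}}'   # no new profile-named network should appear
  docker network inspect bridge \
    --format '{{range .Containers}}{{.Name}} {{end}}'   # node container attached here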

TestKicExistingNetwork (33.18s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-066652 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-066652 --network=existing-network: (31.048400202s)
helpers_test.go:175: Cleaning up "existing-network-066652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-066652
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-066652: (1.991725639s)
--- PASS: TestKicExistingNetwork (33.18s)
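
TestKicExistingNetwork covers the attach-to-existing case: the network is created outside minikube first (hence the docker network ls probe before the start), and --network then points at it by name. A sketch under that assumption, all names hypothetical:

  docker network create existing-net
  out/minikube-linux-arm64 start -p exist-demo --network=existing-net
  out/minikube-linux-arm64 delete -p exist-demo
  docker network inspect existing-net   # a user-created network should survive the delete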

TestKicCustomSubnet (36.57s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-766065 --subnet=192.168.60.0/24
E0214 03:10:11.797479 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:10:26.833766 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:26.839007 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:26.849268 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:26.869509 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:26.909753 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:26.990048 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:27.150384 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:27.470634 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-766065 --subnet=192.168.60.0/24: (34.395459278s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-766065 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-766065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-766065
E0214 03:10:28.111299 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:29.391611 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-766065: (2.154182953s)
--- PASS: TestKicCustomSubnet (36.57s)
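
The subnet check is the one-liner visible above: --subnet pins the address range of the profile's Docker network, and docker network inspect reads it back from the network's IPAM config (the KIC network carries the profile's name). Condensed, with a hypothetical profile name:

  out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
  docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
  # expected output: 192.168.60.0/24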

TestKicStaticIP (38.11s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-857749 --static-ip=192.168.200.200
E0214 03:10:31.952670 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:37.073157 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:10:47.313416 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-857749 --static-ip=192.168.200.200: (35.8224634s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-857749 ip
helpers_test.go:175: Cleaning up "static-ip-857749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-857749
E0214 03:11:07.793634 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-857749: (2.117922562s)
--- PASS: TestKicStaticIP (38.11s)
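
--static-ip works the same way but pins the node's address rather than the range, and the test simply compares it against minikube ip. Sketch, profile name hypothetical; the address must fall inside a private range:

  out/minikube-linux-arm64 start -p ip-demo --static-ip=192.168.200.200
  out/minikube-linux-arm64 -p ip-demo ip   # expected output: 192.168.200.200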

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (69.21s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-073194 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-073194 --driver=docker  --container-runtime=containerd: (31.276844219s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-075548 --driver=docker  --container-runtime=containerd
E0214 03:11:48.754487 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:12:09.895849 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-075548 --driver=docker  --container-runtime=containerd: (32.713531972s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-073194
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-075548
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-075548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-075548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-075548: (2.036529304s)
helpers_test.go:175: Cleaning up "first-073194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-073194
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-073194: (1.992295042s)
--- PASS: TestMinikubeProfile (69.21s)
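
The profile test drives two commands: minikube profile <name> switches the active profile, and profile list -ojson reports all profiles machine-readably. A sketch of reading that JSON back, assuming jq and assuming the layout of minikube's output (a top-level "valid" array of profile objects with a "Name" field; that layout is an assumption here):

  out/minikube-linux-arm64 profile first-demo       # make first-demo the active profile
  out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'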

TestMountStart/serial/StartWithMountFirst (9.29s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-052953 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-052953 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.287467696s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.29s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-052953 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
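
The mount flags exercised by TestMountStart all parametrize the host-directory mount that backs /minikube-host in the guest (uid/gid ownership, the 9p message size, and the port the mount server listens on), which is why verification is just an ls over ssh. Condensed, profile name hypothetical:

  out/minikube-linux-arm64 start -p mount-demo --memory=2048 --no-kubernetes \
    --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
  out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host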

TestMountStart/serial/StartWithMountSecond (5.93s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-055177 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0214 03:12:27.953757 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-055177 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.933203937s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.93s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-055177 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-052953 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-052953 --alsologtostderr -v=5: (1.624214581s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-055177 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-055177
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-055177: (1.196525968s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.5s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-055177
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-055177: (6.495762731s)
--- PASS: TestMountStart/serial/RestartStopped (7.50s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-055177 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (78.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-967935 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0214 03:12:55.637755 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:13:10.675225 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-967935 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.771953757s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.32s)
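
A two-node cluster needs nothing beyond --nodes; the follow-up status call is the quickest way to confirm that a control plane and a worker both came up. Sketch with a hypothetical profile:

  out/minikube-linux-arm64 start -p mn-demo --nodes=2 --memory=2200 \
    --driver=docker --container-runtime=containerd --wait=true
  out/minikube-linux-arm64 -p mn-demo status
  # expect two entries: mn-demo (Control Plane) and mn-demo-m02 (Worker)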

TestMultiNode/serial/DeployApp2Nodes (6.19s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-967935 -- rollout status deployment/busybox: (3.926791999s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-25492 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-d5k2p -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-25492 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-d5k2p -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-25492 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-d5k2p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.19s)

TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-25492 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-25492 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-d5k2p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-967935 -- exec busybox-5b5d89c9d6-d5k2p -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)
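
The pipeline inside those exec calls extracts the host gateway address from within each pod: busybox's nslookup prints the resolved entry for host.minikube.internal on its fifth output line, and the third space-separated field of that line is the IP, which the test then pings. The awk/cut offsets are tied to busybox's nslookup output format, so treat this as a sketch (the pod name is a placeholder):

  kubectl exec <busybox-pod> -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  # prints e.g. 192.168.58.1; the test follows up with: ping -c 1 192.168.58.1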

TestMultiNode/serial/AddNode (31.59s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-967935 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-967935 -v 3 --alsologtostderr: (30.863095429s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.59s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-967935 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.71s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp testdata/cp-test.txt multinode-967935:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile477847699/001/cp-test_multinode-967935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935:/home/docker/cp-test.txt multinode-967935-m02:/home/docker/cp-test_multinode-967935_multinode-967935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m02 "sudo cat /home/docker/cp-test_multinode-967935_multinode-967935-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935:/home/docker/cp-test.txt multinode-967935-m03:/home/docker/cp-test_multinode-967935_multinode-967935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m03 "sudo cat /home/docker/cp-test_multinode-967935_multinode-967935-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp testdata/cp-test.txt multinode-967935-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile477847699/001/cp-test_multinode-967935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935-m02:/home/docker/cp-test.txt multinode-967935:/home/docker/cp-test_multinode-967935-m02_multinode-967935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935 "sudo cat /home/docker/cp-test_multinode-967935-m02_multinode-967935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935-m02:/home/docker/cp-test.txt multinode-967935-m03:/home/docker/cp-test_multinode-967935-m02_multinode-967935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m03 "sudo cat /home/docker/cp-test_multinode-967935-m02_multinode-967935-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp testdata/cp-test.txt multinode-967935-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile477847699/001/cp-test_multinode-967935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935-m03:/home/docker/cp-test.txt multinode-967935:/home/docker/cp-test_multinode-967935-m03_multinode-967935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935 "sudo cat /home/docker/cp-test_multinode-967935-m03_multinode-967935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 cp multinode-967935-m03:/home/docker/cp-test.txt multinode-967935-m02:/home/docker/cp-test_multinode-967935-m03_multinode-967935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 ssh -n multinode-967935-m02 "sudo cat /home/docker/cp-test_multinode-967935-m03_multinode-967935-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.71s)
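
minikube cp, as used above, accepts three source/target shapes: host path to node, node path back to the host, and node to node, with in-cluster paths written as <node>:<path> (a bare path means the host, and ssh -n selects the node to verify on). A compact sketch with hypothetical names:

  out/minikube-linux-arm64 -p mn-demo cp ./local.txt mn-demo-m02:/home/docker/copied.txt
  out/minikube-linux-arm64 -p mn-demo cp mn-demo-m02:/home/docker/copied.txt ./roundtrip.txt
  out/minikube-linux-arm64 -p mn-demo ssh -n mn-demo-m02 "sudo cat /home/docker/copied.txt"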

TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-967935 node stop m03: (1.240620014s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-967935 status: exit status 7 (552.460604ms)

-- stdout --
	multinode-967935
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-967935-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-967935-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-967935 status --alsologtostderr: exit status 7 (540.398681ms)

-- stdout --
	multinode-967935
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-967935-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-967935-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0214 03:14:56.126736 1215531 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:14:56.126923 1215531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:14:56.126929 1215531 out.go:304] Setting ErrFile to fd 2...
	I0214 03:14:56.126935 1215531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:14:56.127182 1215531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:14:56.127366 1215531 out.go:298] Setting JSON to false
	I0214 03:14:56.127423 1215531 mustload.go:65] Loading cluster: multinode-967935
	I0214 03:14:56.127522 1215531 notify.go:220] Checking for updates...
	I0214 03:14:56.128492 1215531 config.go:182] Loaded profile config "multinode-967935": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:14:56.128511 1215531 status.go:255] checking status of multinode-967935 ...
	I0214 03:14:56.129013 1215531 cli_runner.go:164] Run: docker container inspect multinode-967935 --format={{.State.Status}}
	I0214 03:14:56.146951 1215531 status.go:330] multinode-967935 host status = "Running" (err=<nil>)
	I0214 03:14:56.146979 1215531 host.go:66] Checking if "multinode-967935" exists ...
	I0214 03:14:56.147330 1215531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-967935
	I0214 03:14:56.169104 1215531 host.go:66] Checking if "multinode-967935" exists ...
	I0214 03:14:56.169420 1215531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:14:56.169477 1215531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-967935
	I0214 03:14:56.192161 1215531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34112 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/multinode-967935/id_rsa Username:docker}
	I0214 03:14:56.284600 1215531 ssh_runner.go:195] Run: systemctl --version
	I0214 03:14:56.288903 1215531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:14:56.300713 1215531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:14:56.372967 1215531 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-14 03:14:56.362715649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:14:56.373683 1215531 kubeconfig.go:92] found "multinode-967935" server: "https://192.168.58.2:8443"
	I0214 03:14:56.373711 1215531 api_server.go:166] Checking apiserver status ...
	I0214 03:14:56.373757 1215531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 03:14:56.385101 1215531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1307/cgroup
	I0214 03:14:56.395058 1215531 api_server.go:182] apiserver freezer: "11:freezer:/docker/1b2dae6421e1f3a1e1c0892a94f36fbe1b700d83f2ea9fafeacd8c2ac2de6a96/kubepods/burstable/pod0fdc6b0250d552112b439dcbb850b917/57721a4fe9f58d22b86b1bd99da927950eb05196806a95e0292951e4c7217f4c"
	I0214 03:14:56.395135 1215531 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1b2dae6421e1f3a1e1c0892a94f36fbe1b700d83f2ea9fafeacd8c2ac2de6a96/kubepods/burstable/pod0fdc6b0250d552112b439dcbb850b917/57721a4fe9f58d22b86b1bd99da927950eb05196806a95e0292951e4c7217f4c/freezer.state
	I0214 03:14:56.403879 1215531 api_server.go:204] freezer state: "THAWED"
	I0214 03:14:56.403921 1215531 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0214 03:14:56.413122 1215531 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0214 03:14:56.413150 1215531 status.go:421] multinode-967935 apiserver status = Running (err=<nil>)
	I0214 03:14:56.413161 1215531 status.go:257] multinode-967935 status: &{Name:multinode-967935 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 03:14:56.413196 1215531 status.go:255] checking status of multinode-967935-m02 ...
	I0214 03:14:56.413500 1215531 cli_runner.go:164] Run: docker container inspect multinode-967935-m02 --format={{.State.Status}}
	I0214 03:14:56.430420 1215531 status.go:330] multinode-967935-m02 host status = "Running" (err=<nil>)
	I0214 03:14:56.430442 1215531 host.go:66] Checking if "multinode-967935-m02" exists ...
	I0214 03:14:56.430744 1215531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-967935-m02
	I0214 03:14:56.450036 1215531 host.go:66] Checking if "multinode-967935-m02" exists ...
	I0214 03:14:56.450414 1215531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:14:56.450462 1215531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-967935-m02
	I0214 03:14:56.466880 1215531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34117 SSHKeyPath:/home/jenkins/minikube-integration/18166-1129740/.minikube/machines/multinode-967935-m02/id_rsa Username:docker}
	I0214 03:14:56.556653 1215531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:14:56.575658 1215531 status.go:257] multinode-967935-m02 status: &{Name:multinode-967935-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0214 03:14:56.575698 1215531 status.go:255] checking status of multinode-967935-m03 ...
	I0214 03:14:56.576079 1215531 cli_runner.go:164] Run: docker container inspect multinode-967935-m03 --format={{.State.Status}}
	I0214 03:14:56.593410 1215531 status.go:330] multinode-967935-m03 host status = "Stopped" (err=<nil>)
	I0214 03:14:56.593429 1215531 status.go:343] host is not running, skipping remaining checks
	I0214 03:14:56.593437 1215531 status.go:257] multinode-967935-m03 status: &{Name:multinode-967935-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
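
Two behaviors are worth noting from the trace above: status walks each node in turn (container state first, then kubelet and the apiserver /healthz probe only for running hosts), and it exits non-zero, 7 in these runs, as soon as any node is down. That makes a degraded cluster easy to detect from a script; a sketch with a hypothetical profile:

  out/minikube-linux-arm64 -p mn-demo node stop m03
  out/minikube-linux-arm64 -p mn-demo status
  rc=$?                    # 7 here: m03's host and kubelet report Stopped
  [ "$rc" -ne 0 ] && echo "cluster degraded (status exit code $rc)"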

TestMultiNode/serial/StartAfterStop (11.95s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-967935 node start m03 --alsologtostderr: (11.165569344s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.95s)

TestMultiNode/serial/RestartKeepsNodes (117.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-967935
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-967935
E0214 03:15:26.832946 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-967935: (24.931476132s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-967935 --wait=true -v=8 --alsologtostderr
E0214 03:15:54.515588 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-967935 --wait=true -v=8 --alsologtostderr: (1m32.296794055s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-967935
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.39s)
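
RestartKeepsNodes pins down that a full stop followed by start --wait=true restores the same node set, so the node list can be diffed across the cycle. Roughly, with a hypothetical profile:

  out/minikube-linux-arm64 node list -p mn-demo > nodes-before.txt
  out/minikube-linux-arm64 stop -p mn-demo
  out/minikube-linux-arm64 start -p mn-demo --wait=true
  out/minikube-linux-arm64 node list -p mn-demo > nodes-after.txt
  diff nodes-before.txt nodes-after.txt   # expected: no output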

TestMultiNode/serial/DeleteNode (5.04s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 node delete m03
E0214 03:17:09.896196 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-967935 node delete m03: (4.32691435s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.04s)

TestMultiNode/serial/StopMultiNode (23.95s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 stop
E0214 03:17:27.953557 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-967935 stop: (23.768738505s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-967935 status: exit status 7 (90.116141ms)

-- stdout --
	multinode-967935
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-967935-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-967935 status --alsologtostderr: exit status 7 (91.765645ms)

-- stdout --
	multinode-967935
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-967935-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0214 03:17:34.888240 1224378 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:17:34.888414 1224378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:17:34.888433 1224378 out.go:304] Setting ErrFile to fd 2...
	I0214 03:17:34.888440 1224378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:17:34.888921 1224378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:17:34.889171 1224378 out.go:298] Setting JSON to false
	I0214 03:17:34.889199 1224378 mustload.go:65] Loading cluster: multinode-967935
	I0214 03:17:34.889917 1224378 config.go:182] Loaded profile config "multinode-967935": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:17:34.889935 1224378 status.go:255] checking status of multinode-967935 ...
	I0214 03:17:34.890654 1224378 cli_runner.go:164] Run: docker container inspect multinode-967935 --format={{.State.Status}}
	I0214 03:17:34.891088 1224378 notify.go:220] Checking for updates...
	I0214 03:17:34.908395 1224378 status.go:330] multinode-967935 host status = "Stopped" (err=<nil>)
	I0214 03:17:34.908434 1224378 status.go:343] host is not running, skipping remaining checks
	I0214 03:17:34.908441 1224378 status.go:257] multinode-967935 status: &{Name:multinode-967935 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 03:17:34.908465 1224378 status.go:255] checking status of multinode-967935-m02 ...
	I0214 03:17:34.908780 1224378 cli_runner.go:164] Run: docker container inspect multinode-967935-m02 --format={{.State.Status}}
	I0214 03:17:34.924378 1224378 status.go:330] multinode-967935-m02 host status = "Stopped" (err=<nil>)
	I0214 03:17:34.924402 1224378 status.go:343] host is not running, skipping remaining checks
	I0214 03:17:34.924411 1224378 status.go:257] multinode-967935-m02 status: &{Name:multinode-967935-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.95s)

TestMultiNode/serial/RestartMultiNode (86.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-967935 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0214 03:18:32.941504 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-967935 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m25.335779056s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-967935 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.12s)

TestMultiNode/serial/ValidateNameConflict (33.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-967935
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-967935-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-967935-m02 --driver=docker  --container-runtime=containerd: exit status 14 (107.380876ms)

-- stdout --
	* [multinode-967935-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-967935-m02' is duplicated with machine name 'multinode-967935-m02' in profile 'multinode-967935'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-967935-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-967935-m03 --driver=docker  --container-runtime=containerd: (30.680223632s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-967935
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-967935: exit status 80 (346.481958ms)

-- stdout --
	* Adding node m03 to cluster multinode-967935
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-967935-m03 already exists in multinode-967935-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-967935-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-967935-m03: (1.998393442s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.23s)
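
Editor's note: the name-conflict checks above boil down to running the binary and asserting on well-known exit codes (14 for MK_USAGE, 80 for GUEST_NODE_ADD). A minimal sketch of that pattern, assuming a minikube binary on PATH rather than the out/minikube-linux-arm64 build used here (name_conflict_sketch.go is a made-up name):

// name_conflict_sketch.go (hypothetical): run the CLI with a profile name
// that collides with an existing machine name and inspect the exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "multinode-967935-m02",
		"--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// MK_USAGE failures exit with status 14, as in the log above.
		fmt.Printf("exit %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Println("expected a non-zero exit for the duplicated profile name")
}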

TestPreload (149.76s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-491665 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0214 03:20:26.833084 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-491665 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m19.525461125s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-491665 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-491665 image pull gcr.io/k8s-minikube/busybox: (1.350100906s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-491665
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-491665: (11.926258384s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-491665 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-491665 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.438069261s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-491665 image list
helpers_test.go:175: Cleaning up "test-preload-491665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-491665
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-491665: (2.27883542s)
--- PASS: TestPreload (149.76s)
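
Editor's note: the preload scenario above starts an old Kubernetes version with --preload=false, side-loads busybox, stops, restarts on the default version, and then lists images. A small sketch of the final verification step, again assuming a minikube binary on PATH (preload_check_sketch.go is a made-up name):

// preload_check_sketch.go (hypothetical): after the stop/restart cycle, a
// previously pulled image should still appear in "minikube image list".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether ref shows up in the profile's image list.
func imagePresent(profile, ref string) (bool, error) {
	out, err := exec.Command("minikube", "-p", profile, "image", "list").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), ref), nil
}

func main() {
	ok, err := imagePresent("test-preload-491665", "gcr.io/k8s-minikube/busybox")
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	fmt.Println("busybox survived the restart:", ok)
}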

TestScheduledStopUnix (106.45s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-181239 --memory=2048 --driver=docker  --container-runtime=containerd
E0214 03:22:09.896259 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:22:27.953755 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-181239 --memory=2048 --driver=docker  --container-runtime=containerd: (30.407953576s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-181239 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-181239 -n scheduled-stop-181239
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-181239 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-181239 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-181239 -n scheduled-stop-181239
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-181239
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-181239 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-181239
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-181239: exit status 7 (94.680877ms)

-- stdout --
	scheduled-stop-181239
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-181239 -n scheduled-stop-181239
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-181239 -n scheduled-stop-181239: exit status 7 (80.720538ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-181239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-181239
E0214 03:23:50.998577 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-181239: (4.303869643s)
--- PASS: TestScheduledStopUnix (106.45s)
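
Editor's note: the scheduled-stop flow above relies on "minikube status" exiting 7 once the host is down ("status error: exit status 7 (may be ok)"). A simplified polling sketch under that assumption (scheduled_stop_poll_sketch.go is a made-up name; the profile name is taken from the log):

// scheduled_stop_poll_sketch.go (hypothetical): wait for a scheduled stop
// to take effect by polling status until it reports exit code 7 (Stopped).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "scheduled-stop-181239"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("minikube", "status", "-p", profile).Run()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			fmt.Println("host reports Stopped (exit status 7)")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}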

TestInsufficientStorage (11.55s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-446018 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-446018 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.078885862s)

-- stdout --
	{"specversion":"1.0","id":"470e9dfd-ebf4-47b8-bf20-3f33933de2dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-446018] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1fba4f33-74cc-46a3-b9e0-fb9e4abdc27e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18166"}}
	{"specversion":"1.0","id":"0ecea37e-0a4e-4077-8d00-2c645087e2bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b97aa209-d960-4fb8-ab1b-6935204fe55a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig"}}
	{"specversion":"1.0","id":"da9f76d3-0a9f-4f17-a0ad-b4dcfe7a4856","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube"}}
	{"specversion":"1.0","id":"70243ba6-c7c5-410d-997f-817a37ff17a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"93b56be4-28fe-4ba6-8407-39eabf0f6980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7cff5c1b-9c44-44f5-bbd6-a4241793e3ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c6fe8c13-809f-4b8a-922a-3d403ddff322","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3c12e022-fac8-4337-acdf-6074d2078f34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c054cf61-36be-4aeb-9f49-820250bd307c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0d4d1b06-0c9f-4943-b6ae-648960be858d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-446018 in cluster insufficient-storage-446018","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f8b8de6-673e-4784-9a39-4b7c302b38e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb0a6369-ce07-47ee-9053-fb38d5d52531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3def682-b5ad-4553-bde0-e3ef678a0b81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-446018 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-446018 --output=json --layout=cluster: exit status 7 (304.005456ms)

-- stdout --
	{"Name":"insufficient-storage-446018","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-446018","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0214 03:24:03.810631 1241436 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-446018" does not appear in /home/jenkins/minikube-integration/18166-1129740/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-446018 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-446018 --output=json --layout=cluster: exit status 7 (295.524073ms)

-- stdout --
	{"Name":"insufficient-storage-446018","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-446018","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0214 03:24:04.105114 1241487 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-446018" does not appear in /home/jenkins/minikube-integration/18166-1129740/kubeconfig
	E0214 03:24:04.115405 1241487 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/insufficient-storage-446018/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-446018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-446018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-446018: (1.874186427s)
--- PASS: TestInsufficientStorage (11.55s)
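
Editor's note: with --output=json, each progress step, info line, and the final RSRC_DOCKER_STORAGE error above arrives as one line of CloudEvents-style JSON. A minimal consumer sketch, assuming the start output is piped to stdin (events_decode_sketch.go is a made-up name):

// events_decode_sketch.go (hypothetical): decode the line-delimited JSON
// events emitted by "minikube start --output=json"; field names are taken
// from the events shown in the log above (all data values are strings).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON noise
		}
		// The storage failure above is an ...minikube.error event whose data
		// map carries exitcode "26" and the advice text.
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}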

TestRunningBinaryUpgrade (87.07s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2579420740 start -p running-upgrade-796609 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0214 03:30:26.833223 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2579420740 start -p running-upgrade-796609 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.533447335s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-796609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-796609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.271219926s)
helpers_test.go:175: Cleaning up "running-upgrade-796609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-796609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-796609: (3.285103568s)
--- PASS: TestRunningBinaryUpgrade (87.07s)

TestKubernetesUpgrade (385.38s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-699660 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0214 03:26:49.880262 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-699660 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m9.509168528s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-699660
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-699660: (1.400347037s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-699660 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-699660 status --format={{.Host}}: exit status 7 (98.862036ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-699660 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0214 03:27:09.896358 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:27:27.953714 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-699660 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.337785277s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-699660 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-699660 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-699660 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (149.674896ms)

-- stdout --
	* [kubernetes-upgrade-699660] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-699660
	    minikube start -p kubernetes-upgrade-699660 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6996602 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-699660 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-699660 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0214 03:32:09.895548 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-699660 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.260434604s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-699660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-699660
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-699660: (2.473744892s)
--- PASS: TestKubernetesUpgrade (385.38s)
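
Editor's note: after the upgrade, the test confirms the running control plane with "kubectl version --output=json". A small sketch of that check (k8s_version_sketch.go is a made-up name; only the serverVersion field is decoded):

// k8s_version_sketch.go (hypothetical): parse kubectl's JSON version output
// and report the server's gitVersion, which should be v1.29.0-rc.2 here.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-699660",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	var v struct {
		ServerVersion struct {
			GitVersion string `json:"gitVersion"`
		} `json:"serverVersion"`
	}
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("unexpected version payload:", err)
		return
	}
	fmt.Println("server reports", v.ServerVersion.GitVersion)
}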

TestMissingContainerUpgrade (167.3s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1204097496 start -p missing-upgrade-770494 --memory=2200 --driver=docker  --container-runtime=containerd
E0214 03:25:26.832850 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1204097496 start -p missing-upgrade-770494 --memory=2200 --driver=docker  --container-runtime=containerd: (1m30.952176929s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-770494
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-770494: (10.517717648s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-770494
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-770494 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-770494 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.426874266s)
helpers_test.go:175: Cleaning up "missing-upgrade-770494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-770494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-770494: (2.460253193s)
--- PASS: TestMissingContainerUpgrade (167.30s)

TestPause/serial/Start (95.95s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-975528 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-975528 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m35.950946512s)
--- PASS: TestPause/serial/Start (95.95s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318871 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-318871 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (115.280024ms)

-- stdout --
	* [NoKubernetes-318871] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (42.8s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318871 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318871 --driver=docker  --container-runtime=containerd: (42.390155049s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-318871 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.80s)

TestNoKubernetes/serial/StartWithStopK8s (16.36s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318871 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318871 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.055825281s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-318871 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-318871 status -o json: exit status 2 (320.277025ms)

-- stdout --
	{"Name":"NoKubernetes-318871","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-318871
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-318871: (1.982014393s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.36s)
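
Editor's note: the status check above exits 2 because kubelet and apiserver are stopped while the host stays up; its -o json body still decodes cleanly. A sketch of reading it, with field names mirroring the JSON in the log (status_json_sketch.go is a made-up name):

// status_json_sketch.go (hypothetical): decode "minikube status -o json"
// for a --no-kubernetes profile; expect Host running, Kubelet/APIServer
// stopped, matching the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// Ignore the non-zero exit here; stdout still carries the JSON body.
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-318871",
		"status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("no status payload:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}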

TestNoKubernetes/serial/Start (7.79s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318871 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318871 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.789010246s)
--- PASS: TestNoKubernetes/serial/Start (7.79s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-318871 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-318871 "sudo systemctl is-active --quiet service kubelet": exit status 1 (297.632436ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (0.96s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-318871
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-318871: (1.226996185s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (6.52s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318871 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318871 --driver=docker  --container-runtime=containerd: (6.520113837s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.52s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-318871 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-318871 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.244787ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPause/serial/SecondStartNoReconfiguration (7.51s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-975528 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-975528 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.49349938s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.51s)

TestPause/serial/Pause (0.95s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-975528 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestPause/serial/VerifyStatus (0.48s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-975528 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-975528 --output=json --layout=cluster: exit status 2 (483.176879ms)

-- stdout --
	{"Name":"pause-975528","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-975528","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.48s)
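
Editor's note: the --layout=cluster payload above encodes states as HTTP-style codes: 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage earlier in this report. A sketch decoding just the top-level state (cluster_layout_sketch.go is a made-up name):

// cluster_layout_sketch.go (hypothetical): read the cluster-wide status
// line; a paused cluster makes the command exit 2, so stdout is kept
// regardless of the error.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterState struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	out, _ := exec.Command("minikube", "status", "-p", "pause-975528",
		"--output=json", "--layout=cluster").Output()
	var cs clusterState
	if err := json.Unmarshal(out, &cs); err != nil {
		fmt.Println("no layout payload:", err)
		return
	}
	fmt.Printf("%s: %d %s\n", cs.Name, cs.StatusCode, cs.StatusName)
}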

TestPause/serial/Unpause (0.96s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-975528 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

TestPause/serial/PauseAgain (1.34s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-975528 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-975528 --alsologtostderr -v=5: (1.338529584s)
--- PASS: TestPause/serial/PauseAgain (1.34s)

TestPause/serial/DeletePaused (3.18s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-975528 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-975528 --alsologtostderr -v=5: (3.179517974s)
--- PASS: TestPause/serial/DeletePaused (3.18s)

TestPause/serial/VerifyDeletedResources (0.22s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-975528
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-975528: exit status 1 (22.991271ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-975528: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.22s)

TestStoppedBinaryUpgrade/Setup (1.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.58s)

TestStoppedBinaryUpgrade/Upgrade (111.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.956627584 start -p stopped-upgrade-760828 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.956627584 start -p stopped-upgrade-760828 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.255259468s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.956627584 -p stopped-upgrade-760828 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.956627584 -p stopped-upgrade-760828 stop: (20.07270184s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-760828 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-760828 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.357492905s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.69s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-760828
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-760828: (1.080812064s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

TestNetworkPlugins/group/false (5.86s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-252586 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-252586 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (344.287077ms)

-- stdout --
	* [false-252586] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0214 03:32:26.261571 1280526 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:32:26.268485 1280526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:32:26.268531 1280526 out.go:304] Setting ErrFile to fd 2...
	I0214 03:32:26.268580 1280526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:32:26.270255 1280526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18166-1129740/.minikube/bin
	I0214 03:32:26.271155 1280526 out.go:298] Setting JSON to false
	I0214 03:32:26.272920 1280526 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22492,"bootTime":1707859054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0214 03:32:26.273071 1280526 start.go:138] virtualization:  
	I0214 03:32:26.281141 1280526 out.go:177] * [false-252586] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 03:32:26.283395 1280526 out.go:177]   - MINIKUBE_LOCATION=18166
	I0214 03:32:26.285665 1280526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:32:26.283526 1280526 notify.go:220] Checking for updates...
	I0214 03:32:26.287720 1280526 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18166-1129740/kubeconfig
	I0214 03:32:26.290040 1280526 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18166-1129740/.minikube
	I0214 03:32:26.292197 1280526 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:32:26.294358 1280526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:32:26.297102 1280526 config.go:182] Loaded profile config "force-systemd-env-051392": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0214 03:32:26.297216 1280526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:32:26.324552 1280526 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:32:26.324664 1280526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:32:26.444184 1280526 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-14 03:32:26.433476973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:32:26.444287 1280526 docker.go:295] overlay module found
	I0214 03:32:26.452213 1280526 out.go:177] * Using the docker driver based on user configuration
	I0214 03:32:26.454408 1280526 start.go:298] selected driver: docker
	I0214 03:32:26.454427 1280526 start.go:902] validating driver "docker" against <nil>
	I0214 03:32:26.454441 1280526 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:32:26.456815 1280526 out.go:177] 
	W0214 03:32:26.458804 1280526 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0214 03:32:26.460517 1280526 out.go:177] 

** /stderr **
E0214 03:32:27.953676 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-252586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-252586

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /etc/resolv.conf:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-252586

>>> host: crictl pods:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: crictl containers:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> k8s: describe netcat deployment:
error: context "false-252586" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-252586" does not exist

>>> k8s: netcat logs:
error: context "false-252586" does not exist

>>> k8s: describe coredns deployment:
error: context "false-252586" does not exist

>>> k8s: describe coredns pods:
error: context "false-252586" does not exist

>>> k8s: coredns logs:
error: context "false-252586" does not exist

>>> k8s: describe api server pod(s):
error: context "false-252586" does not exist

>>> k8s: api server logs:
error: context "false-252586" does not exist

>>> host: /etc/cni:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: ip a s:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: ip r s:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: iptables-save:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: iptables table nat:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> k8s: describe kube-proxy daemon set:
error: context "false-252586" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-252586" does not exist

>>> k8s: kube-proxy logs:
error: context "false-252586" does not exist

>>> host: kubelet daemon status:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: kubelet daemon config:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> k8s: kubelet logs:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-252586

>>> host: docker daemon status:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: docker daemon config:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /etc/docker/daemon.json:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: docker system info:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: cri-docker daemon status:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: cri-docker daemon config:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: cri-dockerd version:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: containerd daemon status:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: containerd daemon config:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /etc/containerd/config.toml:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: containerd config dump:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: crio daemon status:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: crio daemon config:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: /etc/crio:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

>>> host: crio config:
* Profile "false-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-252586"

----------------------- debugLogs end: false-252586 [took: 5.29734067s] --------------------------------
helpers_test.go:175: Cleaning up "false-252586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-252586
--- PASS: TestNetworkPlugins/group/false (5.86s)
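
The "Profile ... not found" lines above come from the post-mortem debug collector probing a profile that this test validates but never actually starts. A minimal sketch of confirming that by hand, assuming the same out/minikube-linux-arm64 build the suite uses:

    # List the profiles minikube knows about; false-252586 should be absent.
    out/minikube-linux-arm64 profile list

    # Starting it manually would create the profile (hypothetical local run):
    out/minikube-linux-arm64 start -p false-252586 --driver=docker --container-runtime=containerd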

TestStartStop/group/old-k8s-version/serial/FirstStart (121.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-707832 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0214 03:35:12.942901 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:35:26.833740 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-707832 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m1.887983627s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (121.89s)
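
Once a pinned start like the one above succeeds, the requested control-plane version can be checked from the client side. A minimal sketch, assuming the kubeconfig context created by this run:

    # serverVersion in the output should report the requested v1.16.0.
    kubectl --context old-k8s-version-707832 version -o json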

TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-707832 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b511469b-8425-4551-8ed0-00525e1cec64] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b511469b-8425-4551-8ed0-00525e1cec64] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00314903s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-707832 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
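
The DeployApp step only checks that a busybox pod schedules and that its open-file limit is queryable. The same probe can be repeated by hand with the commands the test itself runs:

    # Find the pod created from testdata/busybox.yaml, then query its fd limit.
    kubectl --context old-k8s-version-707832 get pods -l integration-test=busybox
    kubectl --context old-k8s-version-707832 exec busybox -- /bin/sh -c "ulimit -n"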

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-707832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-707832 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)
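
The --images/--registries flags above use minikube's ComponentName=value convention, here pointing metrics-server at a fake registry so the test exercises the override rather than the addon. A sketch of inspecting what the deployment actually received (the exact composed image string is an assumption; it should at least carry the fake.domain prefix):

    kubectl --context old-k8s-version-707832 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'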

TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-707832 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-707832 --alsologtostderr -v=3: (12.080541958s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-707832 -n old-k8s-version-707832
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-707832 -n old-k8s-version-707832: exit status 7 (82.26874ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-707832 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
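
As the "(may be ok)" annotations indicate, minikube status encodes machine state in its exit code, so a stopped host yields exit status 7 without that being a failure. A minimal sketch of observing this directly:

    # Prints "Stopped" and exits non-zero while the profile is down.
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-707832
    echo "exit=$?"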

TestStartStop/group/old-k8s-version/serial/SecondStart (662.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-707832 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-707832 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m1.819867306s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-707832 -n old-k8s-version-707832
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (662.19s)

TestStartStop/group/no-preload/serial/FirstStart (77.51s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-951329 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0214 03:37:09.895587 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:37:27.953726 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-951329 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m17.512633479s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.51s)
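
The --preload=false flag above forces the start to pull images through containerd instead of extracting minikube's preloaded image tarball, which is what this variant exercises. The equivalent manual invocation, using the same flags as the test run:

    out/minikube-linux-arm64 start -p no-preload-951329 --memory=2200 --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2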

TestStartStop/group/no-preload/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-951329 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8e9f4d1b-24c4-4122-95b4-dca2d8efb7a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8e9f4d1b-24c4-4122-95b4-dca2d8efb7a6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.006720063s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-951329 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-951329 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-951329 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03055026s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-951329 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-951329 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-951329 --alsologtostderr -v=3: (12.026142897s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-951329 -n no-preload-951329
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-951329 -n no-preload-951329: exit status 7 (83.142361ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-951329 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (336.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-951329 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0214 03:40:26.833173 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:40:30.999407 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:42:09.895538 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:42:27.953741 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:43:29.881125 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-951329 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m36.536573511s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-951329 -n no-preload-951329
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.99s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dpd7v" [594945e4-22b7-4fc9-8fb9-71d973109be4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dpd7v" [594945e4-22b7-4fc9-8fb9-71d973109be4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.007926479s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)
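
The wait loop above polls for dashboard pods by label through the Go helpers; kubectl can express the same readiness check directly. A sketch using the namespace and selector from the test:

    # Block until the dashboard pod reports Ready, mirroring the 9m0s test wait.
    kubectl --context no-preload-951329 -n kubernetes-dashboard wait \
      --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m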

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dpd7v" [594945e4-22b7-4fc9-8fb9-71d973109be4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003926367s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-951329 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-951329 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
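
VerifyKubernetesImages lists every image in the runtime and flags anything outside the expected Kubernetes set. The raw listing can be inspected the same way; the jq filter below assumes the JSON carries a repoTags array per image:

    out/minikube-linux-arm64 -p no-preload-951329 image list --format=json \
      | jq -r '.[].repoTags[]'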

TestStartStop/group/no-preload/serial/Pause (3.46s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-951329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-951329 --alsologtostderr -v=1: (1.055352042s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-951329 -n no-preload-951329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-951329 -n no-preload-951329: exit status 2 (368.308308ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-951329 -n no-preload-951329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-951329 -n no-preload-951329: exit status 2 (330.179797ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-951329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-951329 -n no-preload-951329
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-951329 -n no-preload-951329
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.46s)
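
The Pause sequence above is a full round trip: pause, confirm the apiserver reports Paused and the kubelet Stopped (each via exit status 2), then unpause. Condensed into shell, it looks roughly like this:

    out/minikube-linux-arm64 pause -p no-preload-951329
    # Both checks exit 2 while paused; that is the expected state, not an error.
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-951329 || true
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-951329 || true
    out/minikube-linux-arm64 unpause -p no-preload-951329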

TestStartStop/group/embed-certs/serial/FirstStart (84.73s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-981721 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0214 03:45:26.833217 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-981721 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m24.727506977s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.73s)
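
The --embed-certs flag above writes the client certificate and key into the kubeconfig as base64 data rather than file paths. A sketch of confirming that, assuming the kubeconfig user is named after the profile as minikube normally does:

    # Non-empty output means the cert is embedded, not referenced by path.
    kubectl config view --raw -o \
      jsonpath='{.users[?(@.name=="embed-certs-981721")].user.client-certificate-data}'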

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-981721 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [910f46fa-2a09-4a59-8c55-4b732548f39d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [910f46fa-2a09-4a59-8c55-4b732548f39d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004002859s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-981721 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-981721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-981721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.09523516s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-981721 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/embed-certs/serial/Stop (12.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-981721 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-981721 --alsologtostderr -v=3: (12.101356938s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-981721 -n embed-certs-981721
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-981721 -n embed-certs-981721: exit status 7 (82.52399ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-981721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (338.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-981721 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0214 03:47:09.895818 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-981721 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m38.419399381s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-981721 -n embed-certs-981721
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.99s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-z2cvn" [dd9af51d-66bd-4640-aa6f-1325194cba7d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0037984s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-z2cvn" [dd9af51d-66bd-4640-aa6f-1325194cba7d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00326322s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-707832 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-707832 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-707832 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-707832 -n old-k8s-version-707832
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-707832 -n old-k8s-version-707832: exit status 2 (374.886168ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-707832 -n old-k8s-version-707832
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-707832 -n old-k8s-version-707832: exit status 2 (344.482679ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-707832 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-707832 -n old-k8s-version-707832
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-707832 -n old-k8s-version-707832
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-183606 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0214 03:47:45.412623 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:45.417873 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:45.428162 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:45.448452 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:45.488749 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:45.568964 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:45.729303 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:46.050141 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:46.690370 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:47.970970 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:50.531973 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:47:55.652678 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:48:05.893806 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:48:26.374274 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-183606 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m0.308138353s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.31s)
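
The --apiserver-port=8444 flag above moves the API server off minikube's default 8443, which is the point of this group. After the start, the context's server URL should reflect the custom port:

    # Expect an https://<ip>:8444 server URL for this cluster.
    kubectl config view -o \
      jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-183606")].cluster.server}'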

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-183606 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d53a770-4199-4eac-ac57-b7d43a87695a] Pending
helpers_test.go:344: "busybox" [3d53a770-4199-4eac-ac57-b7d43a87695a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d53a770-4199-4eac-ac57-b7d43a87695a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005596937s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-183606 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-183606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-183606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059343983s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-183606 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-183606 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-183606 --alsologtostderr -v=3: (12.084412493s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606: exit status 7 (85.122996ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-183606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-183606 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0214 03:49:07.334728 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:50:26.833159 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:50:29.255604 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:50:46.888360 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:46.893729 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:46.903972 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:46.924288 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:46.964526 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:47.044883 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:47.205419 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:47.525995 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:48.166844 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:49.447670 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:52.008903 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:50:57.129529 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:51:07.369742 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:51:27.850516 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-183606 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m39.744853915s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.28s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-258rm" [21ce4b55-1165-4e11-9a04-28f90aa776d7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-258rm" [21ce4b55-1165-4e11-9a04-28f90aa776d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004412855s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-258rm" [21ce4b55-1165-4e11-9a04-28f90aa776d7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005870651s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-981721 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-981721 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-981721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-981721 -n embed-certs-981721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-981721 -n embed-certs-981721: exit status 2 (355.117423ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-981721 -n embed-certs-981721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-981721 -n embed-certs-981721: exit status 2 (358.612204ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-981721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-981721 -n embed-certs-981721
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-981721 -n embed-certs-981721
E0214 03:51:52.943633 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.23s)
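
Note: every Pause subtest in this report follows the same verify loop: pause the profile, confirm the API server reports "Paused" and the kubelet reports "Stopped" (status exits 2 at that point, which the harness accepts as "may be ok" while components are intentionally not running), then unpause and re-check both. A condensed sketch of that loop, using the commands and profile name from this run:

	out/minikube-linux-arm64 pause -p embed-certs-981721 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-981721 -n embed-certs-981721   # "Paused", exit status 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-981721 -n embed-certs-981721     # "Stopped", exit status 2
	out/minikube-linux-arm64 unpause -p embed-certs-981721 --alsologtostderr -v=1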

TestStartStop/group/newest-cni/serial/FirstStart (45.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-294231 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0214 03:52:08.810894 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:52:09.895546 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:52:27.953352 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-294231 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (45.037697405s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-294231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-294231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.206252863s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-294231 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-294231 --alsologtostderr -v=3: (1.323300183s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-294231 -n newest-cni-294231
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-294231 -n newest-cni-294231: exit status 7 (84.669554ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-294231 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (32.48s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-294231 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0214 03:52:45.412918 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
E0214 03:53:13.095794 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-294231 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (32.129655141s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-294231 -n newest-cni-294231
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.48s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
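
Note: the newest-cni group asserts only control-plane readiness. Because the cluster is started with --network-plugin=cni and no CNI is then installed, user pods cannot come up (the "cni mode requires additional setup before pods can schedule" warnings above), so DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop are no-ops and start waits only on apiserver, system_pods, and default_sa. The start invocation, condensed from the run above:

	out/minikube-linux-arm64 start -p newest-cni-294231 --memory=2200 \
	  --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2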

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-294231 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-294231 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-294231 -n newest-cni-294231
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-294231 -n newest-cni-294231: exit status 2 (336.558213ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-294231 -n newest-cni-294231
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-294231 -n newest-cni-294231: exit status 2 (332.054901ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-294231 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-294231 -n newest-cni-294231
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-294231 -n newest-cni-294231
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.07s)

TestNetworkPlugins/group/auto/Start (58.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0214 03:53:30.731262 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (58.793420682s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.80s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-252586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (8.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-252586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dgrk7" [53daf99f-c018-4cda-8a1a-45f43292986f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dgrk7" [53daf99f-c018-4cda-8a1a-45f43292986f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004531663s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.35s)

TestNetworkPlugins/group/auto/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-252586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
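
Note: each TestNetworkPlugins group runs the same connectivity matrix against its CNI: deploy a netcat pod, resolve in-cluster DNS, connect to the pod over loopback, then connect to the pod through its own Service (the hairpin case, which generally only works when the CNI handles hairpin NAT). Condensed from the auto group's commands above:

	kubectl --context auto-252586 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-252586 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
	kubectl --context auto-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the netcat Service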

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sdxnk" [44820bfa-1859-49d5-9db5-f52e715079db] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sdxnk" [44820bfa-1859-49d5-9db5-f52e715079db] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005497174s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sdxnk" [44820bfa-1859-49d5-9db5-f52e715079db] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004331876s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-183606 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-183606 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-183606 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-183606 --alsologtostderr -v=1: (1.107771248s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606: exit status 2 (396.484831ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606: exit status 2 (440.528701ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-183606 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-183606 --alsologtostderr -v=1: (1.082325741s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-183606 -n default-k8s-diff-port-183606
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.19s)
E0214 04:00:43.315842 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (93.41s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m33.409030775s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.41s)

TestNetworkPlugins/group/calico/Start (80.28s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0214 03:55:26.833232 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
E0214 03:55:46.888992 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
E0214 03:56:14.571584 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m20.284713s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.28s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kbnlk" [878520da-a4c2-4c45-8f85-11d9065f94bd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004982108s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lhwkl" [933782b2-22a0-414c-9a4c-85e0838f1c16] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004242998s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-252586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-252586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lmnbc" [1c04ffa2-d562-4a2d-b83a-5526f2d8d23c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lmnbc" [1c04ffa2-d562-4a2d-b83a-5526f2d8d23c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003939849s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-252586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-252586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fgxf2" [1026c92a-c536-405d-8618-0c7f537e96d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fgxf2" [1026c92a-c536-405d-8618-0c7f537e96d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004437178s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.28s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-252586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-252586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (67.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m7.15060146s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.15s)

TestNetworkPlugins/group/enable-default-cni/Start (90.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0214 03:57:09.895982 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/addons-107916/client.crt: no such file or directory
E0214 03:57:11.000025 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:57:27.953141 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/functional-991896/client.crt: no such file or directory
E0214 03:57:45.412994 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/no-preload-951329/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m30.498088085s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.50s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-252586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-252586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gsx2n" [d954938f-4e70-4b61-baa1-1618acc779f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gsx2n" [d954938f-4e70-4b61-baa1-1618acc779f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003749385s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-252586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-252586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-252586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-88tfm" [124531b4-7182-4a0d-ad19-99893ca0932f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 03:58:40.865820 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/default-k8s-diff-port-183606/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-88tfm" [124531b4-7182-4a0d-ad19-99893ca0932f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003905922s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

TestNetworkPlugins/group/flannel/Start (64.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.613264393s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.61s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-252586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0214 03:58:51.106258 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/default-k8s-diff-port-183606/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (88.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0214 03:59:21.395118 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:21.400355 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:21.410608 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:21.430856 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:21.471090 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:21.551351 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:21.711714 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:22.032264 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:22.673301 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:23.953646 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:26.513999 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:31.634700 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:41.874904 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
E0214 03:59:52.547255 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/default-k8s-diff-port-183606/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-252586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m28.740863222s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.74s)
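
Note: the burst of cert_rotation errors for auto-252586 inside the bridge/Start window traces the watcher's retry schedule: the gap between attempts roughly doubles each time (03:59:21.395 → .400 → .410 → .430 → .471 → .551 → .711 → 22.032 → 22.673 → 23.953 → 26.513 → 31.634 → 41.874; about 5 ms growing to about 10 s), consistent with exponential backoff on the failed key load.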

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ng6fk" [d8898088-a5c4-4799-99b7-a6ecd39fe133] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00406978s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-252586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-252586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gjbw4" [70427d82-492c-4682-9d3d-148f761fc408] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 04:00:02.355609 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/auto-252586/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gjbw4" [70427d82-492c-4682-9d3d-148f761fc408] Running
E0214 04:00:09.881750 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/ingress-addon-legacy-089373/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003992451s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-252586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-252586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-252586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qlnhq" [6590e0dd-442d-4124-9f49-cf8495369490] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 04:00:46.889252 1135087 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18166-1129740/.minikube/profiles/old-k8s-version-707832/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-qlnhq" [6590e0dd-442d-4124-9f49-cf8495369490] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005000136s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-252586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-252586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.71s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-935155 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-935155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-935155
--- SKIP: TestDownloadOnlyKic (0.71s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-062058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-062058
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:523:
----------------------- debugLogs start: kubenet-252586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-252586

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-252586

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /etc/hosts:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /etc/resolv.conf:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-252586

>>> host: crictl pods:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: crictl containers:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> k8s: describe netcat deployment:
error: context "kubenet-252586" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-252586" does not exist

>>> k8s: netcat logs:
error: context "kubenet-252586" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-252586" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-252586" does not exist

>>> k8s: coredns logs:
error: context "kubenet-252586" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-252586" does not exist

>>> k8s: api server logs:
error: context "kubenet-252586" does not exist

>>> host: /etc/cni:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: ip a s:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: ip r s:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: iptables-save:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: iptables table nat:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-252586" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-252586" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-252586" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: kubelet daemon config:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> k8s: kubelet logs:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-252586

>>> host: docker daemon status:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: docker daemon config:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: docker system info:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: cri-docker daemon status:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: cri-docker daemon config:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: cri-dockerd version:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: containerd daemon status:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: containerd daemon config:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: containerd config dump:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: crio daemon status:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: crio daemon config:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: /etc/crio:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

>>> host: crio config:
* Profile "kubenet-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-252586"

----------------------- debugLogs end: kubenet-252586 [took: 4.906614415s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-252586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-252586
--- SKIP: TestNetworkPlugins/group/kubenet (5.11s)
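Every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the test is skipped before a cluster is ever created, yet the diagnostic sweep still runs against the nonexistent kubenet-252586 context. A minimal Go sketch of such a best-effort sweep, assuming kubectl and minikube are on PATH (an illustration only, not minikube's actual debugLogs implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "kubenet-252586"
        // Each probe is attempted unconditionally; failures are recorded
        // in the report instead of aborting the sweep.
        probes := [][]string{
            {"kubectl", "--context", profile, "get", "nodes,svc,ep,ds,deploy,pods", "-A"},
            {"kubectl", "--context", profile, "describe", "deploy", "netcat"},
            {"minikube", "-p", profile, "ssh", "cat /etc/resolv.conf"},
        }
        for _, p := range probes {
            out, err := exec.Command(p[0], p[1:]...).CombinedOutput()
            fmt.Printf(">>> %v\n%s", p, out)
            if err != nil {
                fmt.Printf("(probe failed: %v)\n", err)
            }
        }
    }
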

TestNetworkPlugins/group/cilium (5.13s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523:
----------------------- debugLogs start: cilium-252586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-252586

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-252586

>>> host: /etc/nsswitch.conf:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /etc/hosts:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /etc/resolv.conf:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-252586

>>> host: crictl pods:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: crictl containers:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> k8s: describe netcat deployment:
error: context "cilium-252586" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-252586" does not exist

>>> k8s: netcat logs:
error: context "cilium-252586" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-252586" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-252586" does not exist

>>> k8s: coredns logs:
error: context "cilium-252586" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-252586" does not exist

>>> k8s: api server logs:
error: context "cilium-252586" does not exist

>>> host: /etc/cni:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: ip a s:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: ip r s:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: iptables-save:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: iptables table nat:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-252586

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-252586

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-252586" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-252586" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-252586

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-252586

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-252586" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-252586" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-252586" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-252586" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-252586" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: kubelet daemon config:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> k8s: kubelet logs:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-252586

>>> host: docker daemon status:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: docker daemon config:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: docker system info:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: cri-docker daemon status:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: cri-docker daemon config:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: cri-dockerd version:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: containerd daemon status:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: containerd daemon config:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: containerd config dump:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: crio daemon status:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: crio daemon config:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: /etc/crio:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

>>> host: crio config:
* Profile "cilium-252586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252586"

----------------------- debugLogs end: cilium-252586 [took: 4.937178305s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-252586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-252586
--- SKIP: TestNetworkPlugins/group/cilium (5.13s)